Global average temperature increase GISS HadCRU and NCDC compared

I made some graphs of global temperature change according to the three major compilations based on measured surface temperatures: GISS, HadCRU and NCDC. They are expressed as the temperature difference (“anomaly”) with respect to the 1901-2000 average as the baseline.

Temperatures jiggle up and down, but the overall trend is up: The globe is warming.

To highlight the long-term trend more clearly, below is the same figure with the 11-year running mean added (the mean stops 5 years short of each endpoint, for lack of data to calculate it there):
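(For anyone wanting to reproduce the smoothing, a minimal sketch of a centered 11-year running mean in Python; the array of annual anomalies is assumed to be loaded already.)

    import numpy as np

    def running_mean(x, window=11):
        # Centered mean; undefined within window//2 points of each end,
        # which is why the smoothed curve stops 5 years short of the endpoints.
        x = np.asarray(x, dtype=float)
        half = window // 2
        out = np.full(len(x), np.nan)
        for i in range(half, len(x) - half):
            out[i] = x[i - half:i + half + 1].mean()
        return out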

Some people prefer you to look only at the last dozen years:

Often, the last datapoint (representing 2009) is omitted, and only HadCRU temperatures (in blue) are shown, to create the most visually compelling picture for claiming that “global warming has stopped” or even reversed (“blogal cooling”, pun intended).

If however we look at the trend through the average of the three datasets over the period 1975-2009 (during which greenhouse gas forcing was the dominant driver of climate change), we see the following:

The trend over 1975 to 2009 is approximately the same (0.17 +/- 0.03 degrees per decade) for all three temperature series.

The error represents the 95% confidence interval for the trend: if the trend analysis were repeated on many independent realizations of the underlying process, about 95 out of 100 of the intervals so obtained would contain the true trend. Here that interval runs from 0.14 to 0.20 degrees per decade.

The thin black lines represent the 95% confidence “prediction bands” for the data: Based on the observed variability, 95% of the data are expected to fall within these lines.

The observed yearly variability in global temperatures (sometimes exceeding 0.2 degrees) is such that 10 years is too short to discern the underlying long term trend (0.17 degrees per decade). There is no sign that the warming trend of the past 35 years has recently stopped or reversed.
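By way of illustration, such a trend fit can be reproduced in Python with statsmodels; a minimal sketch, in which the data file is hypothetical and the anomaly series is assumed to be the average of the three datasets:

    import numpy as np
    import statsmodels.api as sm

    years = np.arange(1975, 2010)
    # hypothetical file: one annual anomaly (dataset average) per line
    anomalies = np.loadtxt("annual_anomalies.txt")

    X = sm.add_constant(years)
    fit = sm.OLS(anomalies, X).fit()

    trend = fit.params[1] * 10             # trend in degrees per decade
    ci_lo, ci_hi = fit.conf_int()[1] * 10  # 95% confidence interval for the trend

    pred = fit.get_prediction(X)
    bands = pred.conf_int(obs=True)        # 95% prediction bands for the data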

More info:

A major difference between the datasets is that HadCRU omits the Arctic (in effect assuming that it warms as the global average), while GISS estimates it by interpolation. I don’t know about NCDC. See also RealClimate and James Hansen.

Similar analysis of GISS, HadCRU and NCDC temperatures up to 2007 by Tamino. Other nifty analyses by Tamino relating to the same theme can be found here, here, here and here.

1998 was a record warm year in large part because of a very strong El Nino event. If the effect of the ENSO cycle is removed, the warming trend becomes even more apparent; see e.g. RealClimate. Other rebuttals of the spurious 1998 claim at SkepticalScience, Coby Beck, Zeke Hausfather, RealClimate, Scott Mandia, Greenfyre (including lots more links) and Peter Sinclair of the Climate Denial Crock of the Week youtube video series.

Four independent statisticians were given the data (up to 2008) and asked to look for trends, without being told what the numbers represented. Not surprisingly, they found no evidence of a downward trend. Story retold e.g. here and here.

Robert Grumbine explains the art of cherrypicking and why it is not science.

Update: If you want higher resolution versions of any of the figures here you can email me via the link on the right (under “pages”).


2,192 Responses to “Global average temperature increase GISS HadCRU and NCDC compared”

  1. Länkar 2010-03-02 Says:

    […] Global average temperature increase GISS HadCRU and NCDC compared […]

  2. Andrew Says:

    Nice graphs; crisp and clear.

    Of course, they only represent surface temperatures (land and water).

    Climate change also involves warming (heating) of subsurface waters, permafrost and land ice. As I recall, the amount of heat energy involved with rising surface temperatures amounts to only about 3% of the total heat change for the globe. That is to say about 97% of the heat from global warming is flowing into subsurface waters, permafrost and land ice.

  3. VS Says:

    Hi Bart,

    Actually, statistically speaking, there is no clear ‘trend’ here, and the Ordinary Least Squares (OLS) trend you estimated up there is simply nonsensical, and has nothing to do with statistics.

    Here is a series of Augmented Dickey-Fuller tests performed on the temperature series (lag selection on the basis of a standard entropy measure, the SIC, i.e. the Schwarz information criterion), designed to distinguish between deterministic and stochastic trends. This is the first and most essential step in any time series analysis; see for starters Granger’s work at http://nobelprize.org/nobel_prizes/economics/laureates/2003/

    Test results:

    ** CRUTEM3, global mean, 1850-2008:
    Level series, ADF test statistic (p-value):
    -0.329923 (0.9164)
    First difference series, ADF test statistic (p-value):
    -13.06345 (0.0000)

    Conclusion: I(1)

    ** GISSTEMP, global mean, 1881-2008:
    Level series, ADF test statistic (p-value):
    -0.168613 (0.6234)
    First difference series, ADF test statistic (p-value):
    -11.53925 (0.0000)

    Conclusion: I(1)

    ** GISSTEMP, global mean, combined, 1881-2008:
    Level series, ADF test statistic (p-value):
    -0.301710 (0.5752)
    First difference series, ADF test statistic (p-value):
    -10.84587 (0.0000)

    Conclusion: I(1)

    ** HADCRUT, global mean, 1850-2008:
    Level series, ADF test statistic (p-value):
    -1.061592 (0.2597)
    First difference series, ADF test statistic (p-value):
    -11.45482 (0.0000)

    Conclusion: I(1)

    These results are furthermore in line with the literature on the topic. See the following:

    ** Woodward and Gray (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk. Simply calculating OLS trends and claiming that there is a 'clear increase' is non-sense (non-science). According to what we observe therefore, temperatures might either increase or decrease in the following year (so no 'trend').
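    (A test of this kind can be reproduced with statsmodels. A minimal sketch, assuming an annual series in a hypothetical file; statsmodels’ “BIC” lag selection is the same criterion as the SIC mentioned above, and the exact statistics will differ with software and sample.)

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        temps = np.loadtxt("hadcrut_annual.txt")  # hypothetical file of annual means

        # Levels: H0 = unit root; a large p-value means a unit root cannot be rejected
        stat, pvalue, *rest = adfuller(temps, autolag="BIC")

        # First differences: a near-zero p-value here, combined with the result
        # above, is the pattern reported as "Conclusion: I(1)"
        stat_d, pvalue_d, *rest_d = adfuller(np.diff(temps), autolag="BIC")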

    There is more. Take a look at Beenstock and Reingewertz (2009). They apply proper econometric techniques (as opposed to e.g. Kaufmann, who performs mathematically/statistically incorrect analyses) for the analysis of such series together with greenhouse forcings, solar irradiance and the like (i.e. the GHG forcings are I(2) and temperatures are I(1), so they cannot be cointegrated, as this makes them asymptotically independent; they therefore have to be related via more general methods such as polynomial cointegration).

    Any long term relationship between CO2 and global temperatures is rejected. This amounts, at the very least, to a huge red flag.

    Claims of the type you made here are typical of 'climate science'. You guys apparently believe that you need not pay attention to any already established scientific field (here, statistics). In this context, much of McIntyre's criticism is valid, however much you guys experience it as 'obstructionism'.

    It would do your discipline well to develop a proper methodology first, and open up all of your methods to external scrutiny by other scientists, before diving head first into global policy consulting.

    PS. Also, even if the temperature series contained a deterministic trend (which it doesn't), your 'interpretation' of the 95% confidence interval is imprecise and misleading, at best. I suggest you brush up on your statistics.

  4. Heiko Gerhauser Says:

    Hi Bart,

    I think that the error bands for temperature need to be quite large. I also think that we have a very poor understanding of how aerosol forcings have changed over time, and therefore of how total forcings have changed over time. In addition, there is quite a range of model outputs even for a given forcing history, and we also have a poor understanding of how total forcing will change.

    Or in other words, it might just be that between now and 2030 extra aerosol forcing will mask quite a bit of warming, or that it won’t and we’ll shoot up by 2C over that period.

    The last ten years of data don’t do a great deal to resolve the degree of masking experienced to date. 1C up or 0.5C down would have.

    Or in other words, they are not evidence against a climate sensitivity of 3C, but neither do they add to the evidence for a climate sensitivity of 3C.

    As for VS’s lengthy comment, I just don’t think it’s that helpful to torture the data with statistics. That won’t tell you whether it confirms or disproves the models, or what temperature will be in 2050.

    If you merely do curve fitting, a sine wave with a 30-year period and an amplitude of 0.2C, a constant underlying trend of 0.5C per century, plus a bit of yearly noise (normally distributed, standard deviation of 0.04C) will do nicely, but of course has no predictive power for 2050 or 2100.
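    (A minimal sketch of the curve just described, with the parameter values above; the seed is arbitrary.)

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(100)                          # years since start

        series = (0.2 * np.sin(2 * np.pi * t / 30)  # 30-year sine, amplitude 0.2C
                  + 0.005 * t                       # 0.5C-per-century trend
                  + rng.normal(0, 0.04, t.size))    # yearly noise, sd 0.04C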

    James Annan and I had a discussion on global change a while back about tipping points. James largely doesn’t buy the idea. I am not so sure myself.

  5. Bart Says:

    Andrew,

    You’re right that of the excess energy in the earth system only a small part goes into warming the atmosphere, whereas the bulk goes into warming the oceans (which expand as a result, an important contributor to sea level rise). The deep ocean has continued accumulating heat, sea level rise has continued, and Arctic sea ice has disappeared faster than models have predicted: Global change is abundantly clear in the aggregate of observed changes.

  6. Bart Says:

    VS,

    The observed changes in temperature over the past 130 years are no random walk.

    The Augmented Dickey-Fuller test checks for the presence of a unit root in an autoregressive process, and my guess is that it’s not automatically applicable to the estimation of a trend in temperature series. Your conclusion of a ‘random walk’ is at odds with the observed changes in climate, both recent (past century) and in the deeper past, and also with what is known about the physics of climate:

    One difficulty with the notion that the global mean temperature behaves like a random walk is that it then would imply a more unstable system with similar hikes as we now observe throughout our history. However, the indications are that the historical climate has been fairly stable.

    And OTOH, based on large shifts in climate in the past, there are indications that the climate can be pushed in a certain direction if pushed hard enough. A driving force is needed to provide that push; it doesn’t happen randomly. Plate tectonics, solar output, CO2: They can all be found to be important players in climate changes that occurred in the past, though on different timescales.

    The observed changes fall outside of the bounds of natural variability (i.e. stochastic changes); there must be some contribution of climate drivers (i.e. deterministic changes).

    It is remarkable statistically that the 13 (now 14) warmest years in the modern record have all occurred since 1990. The fact that the 13 warmest years since 1880 could have occurred by accident after 1990 corresponds to a likelihood of no more than 1:10 000.

    How would you estimate that chance?

    RC continues:

    An even more serious problem with (…) the random walk notion is that a hike in the global surface temperature would have physical implications – be it energetic (Stefan-Boltzmann, heat budget) or dynamic (vertical stability, circulation). In fact, one may wonder if an underlying assumption of stochastic behaviour is representative, since after all, the laws of physics seem to rule our universe.

    See also my reply to Andrew about changes in other parts of the climate system.

  7. Bart Says:

    Heiko,

    You’re right, aerosols are the big unknown in the 20th century changes in forcings, and the spread in individual model runs is large indeed (by the eye even larger than the spread in observed temperatures). For those reasons, the observed warming over the 20th century doesn’t provide a very strong constraint on climate sensitivity.

    From emission inventories we do know that the aerosol precursor emissions rose sharply at the middle of the 20th century, and their cooling effect canceling the increasing GHG forcing is likely responsible for the 30 year stable period in global average temperature between the 1940s and 1970s.

    There are some educated guesses about what aerosol emissions will do in the near future (will write about that another time; briefly, over the US and EU they are already decreasing, whereas over Asia they will probably increase before decreasing later this century).

  8. VS Says:

    “As for VS’s lengthy comment, I just don’t think it’s that helpful to torture the data with statistics. That won’t tell you whether it confirms or disproves the models, or what temperature will be in 2050.”

    Hi Heiko,

    I wouldn’t classify the test results I posted above as ‘torture of the data’; coming from my field, that judgement would be far more applicable to what Mann et al are doing with their endless and statistically unjustified ‘adjustments’ to proxy/instrumental records.

    Posted above is a standard procedure (the first step even) in time series analysis. Given the test results, the calculations resulting in those confidence intervals (that you believe should be wider) are simply meaningless. They are conditional on:

    (1) the series containing a deterministic trend
    (2) that trend being determined by time only

    Obviously, as (1) is clearly rejected, both assertions are false. Ergo, those error bands make no sense.

    As for ‘statistics’ not being able to disprove a model: that’s a novelty for me. The scientific method, as I was taught, involves testing your hypothesis with observations. Statistics is the formal method of assessing what you actually observe.

    Given the hypercomplex and utterly chaotic nature of the Earth’s climate, and the non-experimental nature of observations it generates, I don’t see any other way of verifying/testing a model trying to describe or explain it.

    Here’s an interesting read: an article published in 1944 in Econometrica, dealing with the probabilistic approach to hypothesis testing of economic theory (i.e. also a discipline attempting to model a hypercomplex chaotic system generating non-experimental observations).

    It is written by Trygve Haavelmo, who later received a Nobel Prize in Economics, in part also for this paper.

    (PDF: the_probability_approach_in_econometrics.pdf)

    You will note that many of the assertions made about the then-standard approach to hypothesis testing in economics are in fact applicable to present day ‘climate science’ :)

  9. Heiko Gerhauser Says:

    In IPCC lingo, “likely” is fine. It’s hard to say more than that, if even the present day forcing of aerosols covers such a wide range. While we may have a fair idea of how much coal was burnt in 1960, I think our knowledge of how much sulphur was in that coal, and what kind of aerosols (size, black or reflecting, residence time in the atmosphere) that led to, must surely be much poorer than for more recent years; and as said, it’s not exactly a narrow range even for the present.

  10. VS Says:

    Hi Bart (just saw your reply),

    The ‘random walk’ concept is a bit tricky methodologically, and the fellows at RealClimate seem to be taking it too ‘literally’, so allow me to make an attempt to clarify it.

    I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks. That’s furthermore quite clear when we look at Ice-core data (up to 500,000 BC); at the very least, we observe a cyclical pattern, with an average cycle of ~100,000 years.

    However, we are looking at a very, very small subset of those observations, namely the past 150 years or so. In this subsample, our record is clearly observed as a random walk. For the purpose of statistical inference, it has to be treated as such for any analysis to actually make sense mathematically. Again, simply calculating trends via OLS is meaningless (and, as I noted above, those confidence intervals are invalid).

    Statistics is ‘blind’, as it should be, when treating observations. Remember, we are trying to make objective inference.

    Remember that GHG forcings are also observed as a random walk. Now, statistically speaking, not all is lost, and the cointegration approach is well suited to relate these random walk series (it’s beautiful, if you think about it, hence the two Nobel prizes awarded for it so far :).

    Put differently, the series can contain a common stochastic trend, which in its turn would imply an error correction mechanism, where the two series never wander ‘too far’ from each other. Finding such a link between greenhouse gas forcings and temperatures would be strong evidence for the hypothesized CO2/Temperature relationship (the link to the Nobel lecture by Granger is a good first reading to get you started on cointegration).

    An identified cointegration relationship would allow for proper confidence interval estimation, and for any type of ‘forcing’ you describe above. Note also that a correlation (in this case embodied by the cointegrating relationship) is a necessary, but not sufficient, condition for causation.

    However, when we try to relate them employing proper statistical/econometric methods, any long term relationship is rejected. This amounts to a huge red flag for the validity of any phenomenological model.

    PS. You cited the following from RC:

    “It is remarkable statistically that the 13 (now 14) warmest years in the modern record have all occurred since 1990. The fact that the 13 warmest years since 1880 could have occurred by accident after 1990 corresponds to a likelihood of no more than 1:10 000.”

    This is clear and utter nonsense. That likelihood might (might!!) be correct if the ‘random walk’ had somehow referred to levels, and not changes (i.e. first differences). But it doesn’t.

    In time series analysis, one series is treated as a single sample realization from a given data generating process, so conditional on a given DGP, that probability up there is completely meaningless. Conditional on temperatures reaching their 1990 level, their observed 2000 level is very likely, assuming a random walk DGP.

    It is a bit like throwing a die 100 times (and generating a series of dice tosses), then adding the tosses up sequentially, where the realizations [1 2 3 4 5 6] are mapped as [-3 -2 -1 1 2 3], and then claiming that the sum at the end has some kind of deviant likelihood… in fact, assuming a random walk DGP, any realization is equally likely (note that the problem with those confidence intervals stems from exactly this).

    If anything (if! ;), that quote from RC actually confirms the random walk nature of temperature series.

  11. Scott Mandia Says:

    VS,

    I will admit my statistics background is essentially Stats I for science majors so I cannot question your analysis. I do have a few questions for you though:

    1) You appear to be hung up on Mann, Kaufmann, and others whose reconstructions show a hockey-stick type curve. Do you think the majority of the proxy curves are incorrectly analyzed? If so, I would assume that you would publish the analysis in a well-respected journal. You would also be paid handsomely by quite a few groups who are gunning for Mann and any hockey stick researcher, so funding should not be an issue.

    2) How do the statistics explain the following?

    http://www.skepticalscience.com/Senator-Inhofe-attempt-to-distract-from-scientific-realities-of-global-warming.html

    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). Curiously enough, CO2 concentrations and increasing rates are essentially “off the charts” historically. No causation?

    Again, this would be a landmark paper for you to publish.

    I am very impressed with your stats discussion but I would be a fan for life if you could publish answers to those two questions above.

  12. Bart Says:

    VS,

    “Remember that GHG forcings are also observed as a random walk.”

    ? How do I square that statement with the very strong increase in CO2 concentrations over the past 150 years? Just from recollection, I think it’s 3 million years ago that the CO2 concentration was last as high as or higher than it is today. That sure is a counterintuitive definition of random walk, and has nothing to do with what non-statisticians (like me) would call “random”.

    “Conditional on temperatures reaching their 1990 level, their observed 2000 level is very likely, assuming a random walk DGP.”

    ? That sounds a bit like newspeak, “conditional on”. Sure, conditional on the temperatures reaching previous year’s level, there’s nothing strange with this year’s level, and that statement could be repeated for each year. But the long term trend is up, and in the physical world, such trends towards increasing (or decreasing) temperatures over climatologically relevant timescales do not happen without a reason.

    The 1 in 10,000 chance statement comes from a GRL paper, discussed in the PhysOrg link I provided (not RealClimate). They explain the following:

    “This likelihood (1 in 10,000) can be illustrated by using the game of chance “heads or tails”: the likelihood is the same as 14 heads in a row.”

    In your analogy of throwing a die 100 times, the sum total of all realizations can be graphically depicted as a probability density function, and values around the center (0 in your mapping, or 350 if the nominal value of the die is taken) are more likely than those deviating far from it (even though every single realization has the same chance of occurrence). The probability density function of the total will look like a bell shaped curve with 350 as the mean (nominal values of the die taken), and a progressively smaller chance for outliers to either side. A one in 10,000 chance is perhaps reached at a total larger than, say, 500 or thereabouts (just guessing). It is by all means a very unlikely event.

    But perhaps this comes to the root of the misunderstanding: In climate change, we’re interested in the change over time (the sum total of realizations in the previous analogy), not in any particular yearly value (any particular realization in the previous analogy), which indeed has a very strong ‘random’ component to it if you will (natural weather related variability).

  13. VS Says:

    Hi Mandia,

    You raise interesting issues, let me start with the first one; the reconstructions. Note, that I only referred to Mann’s proxy reconstructions in passing. What I posted above was related to the analysis of the instrumental record, as performed by e.g. Kaufmann. However, given that I grew up ‘academically’ analyzing real data, I can make a few comments.

    I find the ‘sticking’ of low variance proxy series onto the high variance instrumental record questionable, to say the least. We can study the general variance of the temperature series from temperature reconstructions based on various ice-core samples, such as the Vostok one. The variance structure is clearly different in the Hockey stick graph; in particular, there is a clear variance break in the series when the instrumental record kicks in. No such ‘break’ is observed in the continuous ice-core sample. The fundamental difference between the series is furthermore clearly shown by the divergence problem (and to be honest, I find the linear ‘divergence’ corrections performed by Briffa et al to be highly suspect from a statistical and methodological point of view).

    One might assume that because the instrumental record is more precise and reliable, adding it to the less reliable proxy record ‘improves’ the total record.

    However, since econometrics/statistical modeling deals with explaining ‘variance’ (rather than levels, a common misconception), any statistical inference based on two series, with structurally different variances, is in fact invalid. Also, comparing the current instrumental record with the proxy record for the purpose of determining the ‘unprecedentedness’ of the current warming, is invalid.

    Now, I’m not saying that proxy series are useless (that would be stupid; data is data, and imperfect data is better than no data), but they simply cannot be used together with the instrumental record, because the two methods basically measure two different things (i.e., putting it very imprecisely: they have different ‘measurement’ biases).

    In broad lines, I have to side with the 2006 Wegman report on this one.

    As for the second issue, I have to note that statistics in general doesn’t ‘explain’ anything. Statistics is simply a method to formally deal with limited observations when testing hypotheses. In that sense, I cannot ‘explain’ those findings. I can, however, test the hypothesis on the basis of what we observe.

    The link you posted stated the following:

    1) CO2 is rising
    2) Most of the rise is anthropogenic
    3) We see an increase in the amount of radiation held by the atmosphere

    Fine, I would say that there is a basis for a hypothesis of warming through CO2 emissions (i.e. a phenomenological model). Ergo, we should then be able to detect the effect of changes in such emissions on changes in temperatures. We have 150 years of proper observations, so something has got to give, right?

    But using the best tools available, we don’t find any proper correlation. In fact, such a relationship is rejected by the data. Now this is a problem for any hypothesis, and if this were an economic (phenomenological) model, it would have suffered a fatal blow by such test results (indeed, many ‘nice’ hypotheses in economics died at the hands of econometricians/statisticians).

    I hope this helps.

    As for the ‘landmark paper’, thanks for the confidence ;)… I’m considering writing something on exactly these topics, but I think I will have to do that in my spare time (I’m in a different field)..

    ——————————-

    Hi Bart,

    “This likelihood (1 in 10,000) can be illustrated by using the game of chance “heads or tails”: the likelihood is the same as 14 heads in a row.”

    This statement is simply wrong, and the fact that it comes from a peer-reviewed study published in GRL says more about the quality of the peer review than it does about the statistical properties of temperature data. Allow me to elaborate:

    The ‘random walk’ component we are talking about (the one we test for) is the change, not the level. Take temperature at time t to be equal to Y(t). Now, if Y(t) follows the simplest version of a random walk, the specification is:

    Y(t)=Y(t-1)+error(t)

    Where the error is independently distributed (independent of itself! not necessarily of other variables/errors, so there is room for relating variables)

    This series is integrated of the first order, or I(1). We can then take the first differences, and obtain a stationary series, D_Y(t)=Y(t)-Y(t-1)=error(t). Now this is the random part where you can apply your bell curve analysis.

    In the context of ‘changes’, tossing (H,H,H,H,H,H,H,H,H,H,H,H,H,H) is just as likely as tossing (H,T,H,T,T,T,H,H,T,T,H,H,T,T) or any other sequence of realizations, the probability of any particular realization being 0.5^14. You are correct to say that, if these tosses represent the changes (map (H,T)->(-1,1)), the expected value of the total change from t=1 to t=14 would be 0. However, the confidence interval of that total change expands as the number of tosses grows.

    ==================

    Simple illustration:

    The total change is equal to the following: Sum(e_t, t=1:14), where e_t is i.i.d. with mean 0 and variance sigma(e).

    The expected value is E(Sum of changes) = E(Sum(e_t, t=1:14)) = Sum(E(e_t), t=1:14) = Sum(0) = 0.

    The variance is Var(Sum of changes) = E((Sum(e_t, t=1:14) - E(Sum(e_t, t=1:14)))^2), where the second term is the expectation of the sum, which equals 0 (as per above). Eliminating it gives Var(Sum of changes) = E((Sum(e_t, t=1:14))^2).

    Now note that because e_t is i.i.d., E[e_i*e_j] = 0 for i unequal to j. So the expression simplifies to E(Sum(e_t^2, t=1:14)) = Sum(E(e_t^2), t=1:14) = Sum(sigma(e), t=1:14) = 14*sigma(e).

    Now substitute n for 14 in that expression and take the limit n->Inf: the variance of the sum, n*sigma(e), grows without bound. Asymptotically (n->Inf) the expected variance of the sum is infinite (hence, we are dealing with a nonstationary series).

    To make a very long story short, seeing a very high temperature level in 2000, starting in 1850, is not at all ‘unlikely’ and inconsistent with a ‘random walk’. I hope you also see now why that GRL comment is nonsense.
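    (The algebra above is easy to check numerically; a minimal sketch, with a normal step distribution as an arbitrary choice and unit variance.)

        import numpy as np

        rng = np.random.default_rng(42)
        n, reps = 14, 100_000
        steps = rng.normal(0.0, 1.0, size=(reps, n))  # i.i.d. changes e_t, variance 1

        totals = steps.sum(axis=1)  # total change over n periods, per replication
        print(totals.mean())        # ~0, matching E(sum of changes) = 0
        print(totals.var())         # ~n * sigma(e) = 14; keeps growing with n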

    ==================

    As for greenhouse gases following a random walk: they are I(2), so (again, a simple representation for the sake of exposition) they look something like this:

    G(t)=G(t-1)+eta(t)
    eta(t)=eta(t-1)+error(t)

    Where the error is independently distributed (same note as above). Note how we have to difference the series twice in order to obtain a stationary series on which we can perform valid statistical analysis.
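    (A minimal sketch of such an I(2) series; differencing twice recovers the stationary errors.)

        import numpy as np

        rng = np.random.default_rng(1)
        error = rng.normal(0, 1, 200)

        eta = np.cumsum(error)  # eta(t) = eta(t-1) + error(t): an I(1) series
        G = np.cumsum(eta)      # G(t) = G(t-1) + eta(t): an I(2) series

        d2G = np.diff(G, n=2)   # second differences are stationary again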

    You nicely depict my point here:

    “In climate change, we’re interested in the change over time (the sum total of realizations in the previous analogy), not in any particular yearly value (any particular realization in the previous analogy), which indeed has a very strong ‘random’ component to it if you will (natural weather related variability).”

    Indeed, we are interested in the sum of all realizations of changes! (take a look at the I(1)/I(2) series definition above) However, what the GRL quote implies is that you guys take actual temperatures (instead of temperature changes) as realizations coming from a bell shaped curve. In that case, observing a series of oddly high values, would indeed be very ‘unlikely’. So, like I noted above, if these were ‘levels’, the author of the quote might (might!) have a point.

    Note also, that the Augmented Dickey Fuller test is designed to test exactly what you posted above, namely, whether it is likely that the underlying DGP is deterministic (trend) or if the series contains a stochastic trend. In doing so, the entire series is taken into account (the guys didn’t get two Nobel prizes for sloppy work :).

    In any case, the conclusions of these tests are unambiguous.

    PS. If we could simply ‘eyeball’ time series, and come to proper conclusions using our ‘intuition’, what would then be the purpose of statistical testing? :)

    PPS. Hmm, there goes my lunch break…

  14. Scott Mandia Says:

    VS,

    I need to read and re-read that Beenstock paper.

    Physics tells us that CO2 forces climate.
    We have increasing CO2 that is being measured.
    The concentrations today are the highest in the past 650,000 years and likely to be higher than at any time in the past 15 million years.
    The planet is warming.
    Models that use the best physics today can replicate this warming only by using increased GHG forcings. In fact, without GHG forcing we would be cooling.

    Although I admit I do not understand your statistics, it would appear to me that they must be wrong. You will need to explain away the above points if you believe that there is not a significant correlation between T and CO2. Can you do so in English?

    I ask this not to be snide, but it is the only way non-experts such as myself can be convinced that science has now been stood on its head. Sorry for my stats ignorance.

  15. VS Says:

    Hi Scott

    Let me try to explain this, indeed, in plain English. Don’t worry, skepticism is fine, and your interest is welcomed (and I’m running some numerical analysis here in the background anyway, so I have some time to spare :)

    First of all, we need to look at the properties of the climate models themselves. In particular, we have to agree on one thing: the models are phenomenological, and not fundamental. Putting it differently, they are derived from lab-based experimental (and arguably fundamental) results, such as the ones you named above, as well as from observed phenomena (e.g. warming, higher CO2 concentrations etc). They are however not derived from fundamental equations directly.

    These models, therefore, are rough approximations of the hypothesized mechanisms driving global temperatures. For starters, while we do have a general idea of the direction of the influences, since these models are not fundamental, the magnitude of all effects must be estimated. Furthermore, because these are not fundamental models, all model specifications are, in broad lines, opinions of researchers (i.e. they are guided by fundamental results, but are not fundamental themselves).

    The thing with phenomenological models is that they still have to be validated with observations. A necessary condition hereby is a proper correlation. If, after exhausting all of the methods we can think of, we still cannot find this correlation, we should really start questioning the model itself.

    In particular, if we cannot detect any significant warming as a direct result of increased CO2 concentrations, perhaps this effect is negated by other latent forces we are not accounting for in our models. In this case, predictions of catastrophic man-made warming are quite premature, and certainly not solid enough to base global policy on.

    Enter empirical testing through econometric methods (i.e. statistical modeling). I’ll try to explain, in as plain English as possible, what Beenstock and Reingewertz did.

    Let me try to explain the cointegration method first, very shortly, so that you understand what it is that the authors are trying to do. Assume that you have two I(1) series (i.e. first differences are stationary, see above). The implication is that at time t, you have no idea which way each series will move at t+1. However, these two random walks can have what is called a common stochastic trend. In other words, the two series might behave randomly to our eyes, but they do so together, and will not stray from each other in the long run. Speaking in statistical terms, while the two series are non-stationary, a linear combination of them (the cointegrating relationship) is itself stationary. The beauty here is that we do not have to understand the entire (arguably hypercomplex) data generating process in order to establish a relationship between the two series. You can also think of it as a very elaborate, yet correct, method of establishing correlations.
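    (statsmodels ships an Engle-Granger style cointegration test for exactly this idea; a minimal sketch with two artificial series that share a stochastic trend by construction. The seed and parameters are arbitrary.)

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(7)
        trend = np.cumsum(rng.normal(0, 1, 300))   # common stochastic trend

        x = trend + rng.normal(0, 0.5, 300)        # two I(1) series that never
        y = 2.0 * trend + rng.normal(0, 0.5, 300)  # wander far from each other

        stat, pvalue, crit = coint(x, y)           # H0: no cointegration
        # a small p-value here is evidence for a cointegrating relationship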

    An established cointegrating relationship between, say, CO2 forcings and temperatures, would, in my eyes, be the first step in validating the man-made global warming hypothesis. However, the data seem to disagree.

    In our sample, we observe temperatures and solar irradiance as I(1). This implies that, as far as we can ‘see’, these series are behaving as random walks, and their first differences are stationary. Greenhouse gas forcings, on the other hand, are I(2), so only their second differences are stationary (first differences are also a random walk, observationally speaking). The issue here is that this results in these two series being asymptotically independent, i.e. they can never be (linearly) cointegrated.

    Kaufmann et al (2006) for example simply ‘ignores’ this issue (although he does note it, strangely enough), claims that the test must be wrong and not the model, and then goes on to cointegrate them anyway. This is wrong.

    What Beenstock and Reingewertz do is much more sophisticated. They allow for higher order (non-linear if you will, but I’m being a bit sloppy here) cointegration, also called polynomial cointegration, between the I(2) greenhouse forcings and I(1) temperatures. However, once all the test procedures are properly applied, they clearly reject any long term relationship between these series. They furthermore find that solar irradiance is by far the biggest determinant of temperatures (very clear cointegrating relationship) while a permanent CO2 increase only has a temporary effect.

    To be honest, considering possible negative feedback mechanisms which could hypothetically deal with CO2 warming, I don’t think the result is that outlandish.

    For me, the lack of (long-term) correlation, is truly a huge red flag. If we cannot match the hypothesized model with what we observe, then what does it stand on? Again, we are dealing with a phenomenological model, not a fundamental one. The model is therefore not set in stone.

    Now, I understand that the ‘reflex’ in this case is not to trust the statistics. My question then is: what should I trust?

    In economic theory there are (were) plenty of models that seemed logical, coherent and broadly in line with both observations and established micro-results (this echoes the ‘but the effect is physical and the Earth is warming!’…), but that simply failed the empirical tests. Note that many of these models performed well in simulations (i.e. they were able to ‘generate’ real world results), however, when put to a proper and rigorous empirical test, they were rejected.

    I combed through the latest IPCC report, and all I saw were simulations, simulations and more simulations (i.e. Chapter 8), and no proper empirical testing. That these simulations are able to ‘mimic’ real world processes is then taken as proof, which again, I find very awkward (i.e. even managing to correlate proves nothing about the underlying causal relationship; it is a necessary, rather than sufficient, condition for validation).

    You also state that “Models that use the best physics today can replicate this warming only by using increased GHG forcings. In fact, without GHG forcing we would be cooling.” With this too, I’m skeptical. We have a rising trend in temperatures, and we have a rising trend in GHG forcings. Naturally, if our model is unable, due to its own defects, to account for the recent warming, we could ‘plug’ this hole by inserting the rising trend in GHG forcings. Without being snide myself here, I have to say that failure to model global temperatures properly doesn’t impress me, and certainly doesn’t constitute a proof.

    I think that judging by what I posted here, you can understand why I’m skeptical. I simply have seen no proper verification of the hypothesis. I also find it very troubling that nobody in the climate science community is truly addressing this question, and that whenever it is brought up, the results are dismissed immediately with “Hey, the effect is physical, so it MUST be there, the statistics are wrong..”

    At the same time, in statistical papers, like for example Kaufmann’s work, time and time again, the hypothesis trumps the statistical test (i.e. the test is rejected rather than the hypothesis).

    To me, this is the world upside down.

    NB. I might sound a bit harsh on Kaufmann here, but that’s just technical-disagreement speak. I don’t think he’s a bad statistician, quite the contrary (even though that error in Kaufmann et al (2006), exposed by Beenstock and Reingewertz, was a bit lame, and should have been picked up by a reviewer).

    However, I do think that his ‘belief’ in the model he’s testing is obscuring his objectivity, and making him too tolerant to rejection.

  16. Bart Says:

    VS,

    You wrote: “In the context of ‘changes’, tossing (H,H,H,H,H,H,H,H,H,H,H,H,H,H) is just as likely as tossing (H,T,H,T,T,T,H,H,T,T,H,H,T,T) or any other sequence of realizations”

    That’s exactly the kind of argument that I responded to in my previous reply: It’s about comparing the sum total. If you replace the H and T by 0 and 1, I’m sure you’ll agree that a sum total of 14 is much less likely than a sum total of 7 (because the latter has many more individual realizations leading to its value of the total). You do seem to be applying the wrong kind of statistical test for the problem at hand.
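    (The counting here is elementary binomial arithmetic; a quick check using only the standard library.)

        from math import comb

        n = 14
        p_each = 1 / 2**n                # any single sequence: 1/16384
        p_sum_14 = comb(n, 14) * p_each  # only one sequence sums to 14
        p_sum_7 = comb(n, 7) * p_each    # 3432 sequences sum to 7 (~0.21)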

    Moreover, and this point was also made in the RealClimate link I provided above, temperature is a physical variable which is bounded (by the laws of physics), while a random walk is necessarily unbounded.

    If you’re so sure that there is no meaningful trend, and that the observed variations are just random (notwithstanding the fact that all yearly temperatures of the past 30 years are higher than any of the yearly average temperatures between 1880 and 1910), perhaps you wouldn’t mind betting on future temperature change?

    Based on your random walk argument, I assume you’d accept 2:1 odds on the globe continuing to warm (i.e. you win 2x if you win (no change or cooling); I win x if I win (i.e. warming)). Perhaps you’d even take this bet for the next 5 years’ temperature average compared to the 5-year average from, say, 1901-1905, since it’s all a random walk anyway?

    The paper discussed on the PhysOrg site is here:

    (PDF: zorita08grl.pdf)

    “How unusual is the recent series of warm years?”

    I think you’re wrong in your description of climate models; to a large degree they are based on fundamental physics, such as radiative transfer. Check the model FAQs on RealClimate for a start.

  17. VS Says:

    Hi Bart,

    I went over this ‘random walk’ thing in detail above, please do read (I spent a lot of time typing, if only for that :). I’m also most definitely not applying the ‘wrong kind of test’. Take a look at the references in my first post and check the methodology, this is the test for the job.

    Also, excuse the authority fallacy (and ensuing ridicule ;), but I’ll trust two Economics Nobel Prizes with my statistics over some quote coming from a journal whose editor is so sloppy with statistics that he writes things like these in interviews with the BBC:

    “BBC – Do you agree that from 1995 to the present there has been no statistically-significant global warming?

    Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level.”

    Not significant at a 95% significance level? Wow, that’s really not significant… it’s significantly insignificant even ;) (leave the ‘warming’, leave the discussion we had above, I’m simply showing how sloppy he is with statistics)

    As for temperature being bounded, sure, but in our sample of observations, it is classified as a random walk via statistical testing, hence, it should be treated as such (again, statistics works with what we observe, not what we think we ‘should’ observe). Any inference ignoring these test results is spurious.

    As for climate models, here’s a definition from wikipedia of the word phenomenological as related to science:

    “The term phenomenology in science is used to describe a body of knowledge which relates empirical observations of phenomena to each other, in a way which is consistent with fundamental theory, but is not directly derived from theory.”

    So the climate models are phenomenological.

    As far as I know, even the greenhouse effect conjecture is not derived directly from fundamental theory…

    ..but I’ll have to leave that discussion to my physicist friends, as my theoretical physics is not… eh… good enough to debate the laws of thermodynamics :)

    PS. As for that bet, with those odds, I might even take you up on it… :D

  18. VS Says:

    PPS. I’ll look at this paper you posted over the weekend. It doesn’t look good though:

    “Different statistical tests of the stationarity of the global mean temperature have yielded conflicting results [Stern and Kauffman, 2000].”

    Error in citation.. it’s not conflicting at all (most authors conclude I(1)). Also, see the tests/references I posted above.

    ..they then assume that it is I(0) with a high persistence… It looks very fishy, and way too short for what they are trying to do.

    I’m also not impressed by the bibliography, which includes two references to statistics papers (where they simply apply the method blindly) and for the rest only climatological stuff (while what they are trying to do is some kind of econometric analysis.. again, ignoring an enormous body of literature)

    Anyhow, I’ll get back to you on this.

  19. Scott Mandia Says:

    VS:

    Thank you for the “English” to explain the stats. Now I can see the reasons behind your skepticism, although I am still skeptical.

    It has been my understanding that solar variance has been relatively small since the late 1800s with the IPCC estimating 0.1C of the 0.8C warming due to the sun.

    Have you seen the following:

    Feulner, G., and Rahmstorf, S. (2010). On the effect of a new grand minimum of solar activity on the future climate on earth, Geophysical Research Letters, in press.

    As discussed at this post over at Skeptical Science, the authors conclude:

    For both the A1B and A2 emission scenario, the effect of a Maunder Minimum on global temperature is minimal. The TSI reconstruction with lesser variation shows a decrease in global temperature of around 0.09°C while the stronger variation in solar forcing shows a difference of around 0.3°C. Compare this to global warming between 3.7°C (A1B scenario) to 4.5°C (A2 scenario). Considering the less variable solar reconstruction shows such strong agreement with past temperature, the authors conclude the most likely impact of a Maunder Minimum by 2100 would be a decrease in global temperature of 0.1°C. With all uncertainties taken into account, the estimated maximum decrease in global temperature is 0.3°C.

    How are the oceans getting their increased heat? I am simplifying but it is either due to a greater source of incoming solar radiation or a decrease in outgoing LW radiation. The sun does not appear to be responsible especially in the past decade which was observed to have a very low TSI but with a record warm climate. Of course, I do know that ocean heat release takes many years so there is a lag, but the heat content is increasing in the oceans so it cannot be blamed on lag.

    What about stratospheric cooling? Even accounting for ozone loss, there should not have been as much cooling if the sun were causing the warming.

    So I see multiple lines of evidence for GHG forcing and no alternative explanation. I wonder if, as Bart suggests, there are some underlying incorrect assumptions built into your analysis. Unfortunately, I am not equipped to figure this out so I will defer to authority.

    I do appreciate your time and now I will seek help. Isn’t that what we all should do when there is a question asked that one cannot answer? :)

  20. Tim Curtin Says:

    Scott: We’ve had brilliant stuff from VS here, plus his cites to Beenstock & co-author 2010. Kaufmann et al are dangerous. Your own ref. to Feulner & Rahmstorf 2010 gives the game away. ‘Global’ temperatures are a compilation from often exiguous surface records at many specific locations. And while atmospheric CO2 is “well-mixed”, and so for all practical purposes the same worldwide, the same is not true of TSI (total solar irradiance), which is measured at the TOA (top of the atmosphere) but is far from being the same at surface level everywhere, otherwise Khartoum now would have the same temperature as Stockholm (minus 5°C max yesterday) – or vice versa! Kaufmann et al, who include David Stern at “my” own ANU, have never recognized the difference between TSI and surface solar radiation (SSR). That is a further reason (to those advanced by VS) why their papers are unsound. My own regressions of dT/dt = f(RF, dSSR/dt) for locations across the USA show zero stat. sig. coefficients for RF of GHG, and highly sig. coefficients for SSR and other variables like RH (relative humidity).

    One major problem with both F&R as well as K. et al is that they fail to adjust the IPCC’s emissions scenarios for the associated variability of uptakes of atmospheric CO2 by the world’s oceanic and terrestrial Biota; as I have shown (Curtin 2009 at my website), along with Knorr (2009), the more the emissions, the greater the biotic uptake, pace IPCC.

    Knorr and I both show how the biotic uptakes have absorbed 57% of total anthropogenic emissions since 1958. As a result, the average rate of growth of the atmospheric concentration of CO2 (aka [CO2]) has been 0.41% p.a. since 1958. But that has not stopped the IPCC (Solomon et al 2007), Solomon et al again in PNAS 2009, and Kaufmann et al (2006) from assuming that [CO2] will grow at 1% p.a. from now on, so that doubling will occur by 2080 (exciting), instead of 2161 or later (very boring).

    Email me at tcurtin@bigblue.net.au for my results for Point Barrow (Alaska) or any other major centre across the USA, from Mauna Loa to San Juan.

  21. Alan Says:

    So using statistics to try to isolate a trend indicating AGW is fundamentally flawed, because the data over 150 years cannot be considered to represent anything other than stochastic processes.

    Is that the argument between VS and Bart et al? I’m guessing from my stats-challenged existence.

    Maybe the problem is that the AGW discussion and search for ‘proof’ has gone not just into the individual trees of the forest, but into the leaves of an individual branch on the individual tree.

    Call me old-fashioned, but the AGW argument is not about statistics – it’s about physics, isn’t it?

    If I was sitting on an individual atom in a sealed glass beaker, observing the movements of all the other atoms (and mine), I would go “wow this is so stochastic!”. And if my task was to calculate what my atom’s energy state would be in the future I just couldn’t say … I can’t observe any trends.

    If I was sitting on a chair and this glass beaker was on the bench and I was shining a heat source at it … I reckon I could use basic physics to predict the average temperature in the beaker after 5/10/etc minutes. Individual atoms don’t matter. Simple physics can be applied. Stochastic micro-processes don’t matter.

    Let’s translate to the globe …

    If I was sitting on a chair in a park reading this blog on my laptop, and observing my micro-climate, I would go “wow this is so stochastic!” … the wind gusts, the clouds move across the sun etc. If my task was to calculate what my micro-climate would be in the future I just couldn’t say … I can’t observe any trends.

    If I was sitting on the moon looking down on the Earth, I would observe a ball with observable properties, fossil fuel burning behaviour, an atmosphere and a sun … I reckon the forecasting of Earth average temperatures boils down to physics. To be more accurate in the shorter term (decades) I would throw in probabilistic events like volcanoes etc.

    On the global scale, the apparent stochastic behaviour of trillions of micro-climate volumes over tiny time periods (decades) doesn’t hide the physics.

    But approaching the question of discernible temperature anomalies and trends and correlations with human behaviour with curve fitting … and then bogging down in arguments about whether it is statistically valid to do so … does take the eye off the physics arguments and is just sooo missing the point.

    Why is the AGW discussion getting drawn into this at all?

    I don’t get it.

  22. Bart Says:

    Alan,

    I think your analogies are spot on.

    VS,

    Climate models incorporate a lot of fundamental physics. Read e.g. here for a start. If you want to understand climate models, I recommend asking a climate modeler (e.g. at RC) rather than a physicist from another branch.

    Your specific issues with the paper by Zorita, Von Storch and Stocker could best be taken up at the blog of the first two.

    And as Scott pointed out as well, there are many changes in all parts of the climate system (air temperature, ocean heat, Arctic ice, ice sheets, glaciers, ecosystems). On top of that, measurements corroborate that less IR radiation is leaving the earth system now than a few decades ago: The enhanced greenhouse effect at work, and a clear sign of a forcing acting on the climate (i.e. it’s not purely stochastic, but we know that from physics already). This all is not merely a coincidence or a ‘random walk’. You’d probably claim that the change from an ice age to an interglacial would be a random walk.

    Check out this video for example, or these posts about the many lines of evidence for anthropogenic forcing of the climate.

  23. Alan Says:

    Thanks Bart!

    Those analogies were inspired by a mosquito … it was buzzing around my head and, try as I might, I couldn’t swat it … its movements were way too stochastic.

    So I shut the door of the office and unloaded the insect spray around the room … the sucker is dead!

    A benefit of graduating as an engineer … our motto is “if at first you don’t succeed, hit it with a bloody great hammer!”

    A

  24. Eli Rabett Says:

    Beenstock and Reingewertz are using a too short forcing record which biases their test for a hinged forcing (two straight lines). If you look at the NOAA AGGI, which is the best record of forcings since 1979, it is clearly linear, and the IPCC discussion shows historical forcings are pretty clearly hinged.

    This is what happens when someone who knows neither the data nor the theory butts in. They make fools of themselves.

  25. Heiko Gerhauser Says:

    Hi Bart and VS,

    I think I’ve found an easy way to illustrate the statistics issue, so that lay people can get it. Say you look at nails coming off a nail-making machine, and at daily readings for a drinking water reservoir. The nails are independent of each other; the reservoir readings are not.

    So, in Excel you could model the nails by a series that adds a random figure to a fixed length. For the reservoir, you might wish to add a random figure not to a fixed quantity of water in the reservoir, but rather to the previous day’s value.

    Excel has a nice function for this: RAND(). It returns a random number between 0 and 1.

    Now, eyeballing the yearly temperatures, they tend to vary by a few one hundredths of a degree per year.

    So, I put in a hundred random values between -0.03 and +0.03, that is RAND()*0.06-0.03

    This series has no underlying trend up or down. Yet it produces graphs that meander up or down quite a bit (I’ve put an example on my blog). In fact the way the graphs meander up or down looks quite a bit like the actual temperature data for the last century.
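    (The same experiment in Python, for readers without Excel, using the corrected increment range above.)

        import numpy as np

        rng = np.random.default_rng()
        increments = rng.uniform(-0.03, 0.03, 100)  # yearly changes, zero trend

        walk = np.cumsum(increments)  # meanders up or down despite no trend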

    —————–

    As said elsewhere in this thread, while the statistics are interesting, they also don’t say a great deal on their own. You can’t just analyse the numbers not knowing whether it’s nails or reservoir levels or global temperatures. You need to understand the underlying mechanisms.

    In addition, I must say that I think the temperature range given by the IPCC (0.7 +/-0.2C) is too narrow. I am dubious that we know the difference between 1995 and 2007 to better than +/- 0.1C and the difference between 2000 and 1850 to better than +/- 0.5C.

  26. Heiko Gerhauser Says:

    Hi Bart,

    if temperatures were up by 5C over the last 100 years and had not varied by more than +/- 1C over the last 1000 and not by more than +/- 2C over the last 10000, that would be pretty good evidence on its own that something to do with industrialisation might be the explanation for the rise.

    The actual picture as I see it is that temperature is up by 0.7C +/- 0.5C over the last 150 years and we don’t know how flat temperatures were in the last 1000 years, but anything between 1.5C colder and slightly warmer than at present looks consistent with the evidence we’ve got.

    Also, on the forcing and response side things aren’t that clear cut, thanks to aerosols and uncertainty about feedbacks. It boils down to: the net forcing should give something like 0.1 to 1.5C. And even that presumes that the assumption of a linear climate sensitivity is correct and that clouds don’t act as a thermostat with tipping points from one stable state to the next.

    So, I would say that the anthropogenic signal is just barely visible in the temperature data, for experts with a great deal of understanding of how the system should work. It is far from the type of hockey stick graph lay people could or should look at to come to a good conclusion independent of a good understanding of the climate system.

    Let me add this on the random walk thing. I do think that it is not appropriate to assume years are independent from each other. Clearly, the oceans store heat, so if due to random variations in cloud cover, one year is warm, then that’ll have an impact on the next year too. I am not sure how best to translate that into a statistical analysis that does make sense. But I am pretty sure that the confidence intervals and trend calculations of yours, which do in fact assume complete independence of years from each other with the random component for each year having nothing to do with the random component of the previous year, are not the way to go.

  27. VS Says:

    Eli Rabett,

    How interesting to see you here.

    You write:

    “Beenstock and Reingewertz are using too short a forcing record, which biases their test for a hinged forcing (two straight lines). If you look at the NOAA AGGI, which is the best record of forcings since 1979, it is clearly linear, and the IPCC discussion shows historical forcings are pretty clearly hinged.

    This is what happens when someone who knows neither the data nor the theory butts in. They make fools of themselves.”

    Oh really?

    On your ‘science’ blog, to which you link, you write:

    “Straightforwardly this is a claim that forcing has been increasing as a second order function, while temperature has only been increasing linearly. Given the noise in the temperature record, that is a reach as an absolute, but Eli is a nice Rabett.”

    Wrong. That’s not at all what they ‘straightforwardly claim’. An I(1) series doesn’t ‘increase linearly’ and an I(2) series doesn’t ‘increase as a second order function’, and I have no idea where you managed to get that from.

    Read the paper, read up on the methods (e.g. order of integration, cointegration, again, the Nobel prize website has a few nice Nobel lectures on it), and then “butt in”.

    It’s all explained a few posts back: the order of integration refers to the number of times you have to difference a series to obtain stationarity (and has nothing to do with the order of the ‘polynomial’ shaping the curve). The order of integration of these series is furthermore determined via statistical testing (not ‘assumption’), and the BR findings are confirmed across all the papers I read on the topic (see references in my first post).

    You then propose that there is a structural break in there somewhere (by ‘eyeballing’ the data), and write:

    “The bunnies tossed back a few beers, took out the ruler and said, hey, that total forcing looks a lot more like two straight lines with a hinge than a second order curve, and indeed, to be fair, the same thought had occurred to B&R”

    ”BUT, the period they looked at was 1880 – 2000. Zeroth order dicking around says that any such test between a second order dependence and two hinged lines is going to be affected strongly by the length of the record. Any bunnies wanna bet what happens if you use a longer record???”

    Oh, why don’t you try me, Mr. Rabett. Start by telling me what exactly happens to the distribution of the Augmented Dickey-Fuller test statistic once the number of observations is expanded. Be sure to use formal notation in your reply, and not ‘bunny wabbit has a hunch’.

    Also, I emailed Beenstock about the data he used when I first read the paper (outside of climate science, that’s quite normal), and he wrote me that all the data they use come from GISS-NASA. Are you suggesting NASA supplied the wrong data? Wasn’t this the data you recommended they use?

    So you say: “This is what happens when someone who knows neither the data nor the theory butts in. They make fools of themselves”

    Indeed, that somebody makes an enormous fool out of themselves.

    Actually, I have to admit I also did some background research on you, Eli, and it turns out that this is not the first time you wade into a topic you have no clue about, and smear/insult the scientists in question because you disagree with their conclusions. Just like here you pretend to understand something about statistics in order to ‘debunk’ BR, you pretended to be an expert on fundamental physics in order to ‘debunk’ the following publication on your anti-scientific smear blog:

    “Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics
    Authors: Gerhard Gerlich, Ralf D. Tscheuschner”

    http://arxiv.org/abs/0707.1161

    And Gerlich and Tscheuschner had the following to say about you on this New York Times blog (comments section).

    “First, let us start with discussing the identity of Eli Rabett. We have been informed that Eli Rabett is the pseudonym of Josh Halpern, a chemistry professor at Howard University. He is a laser spectroscopist with no formal training in climatology and theoretical physics.

    On 2007-11-14 one of us (RDT) sent Josh Halpern the following E-Mail:

    QUOTE:

    Josh Halpern alias Eli Rabbett –
    [If you are not Josh Halpern, then forgive me and delete this message immediately.]

    Apparently, believing to be protected by anonymity you (and others) want to establish a quality of a scientific discussion that is based on offenses and arrogance rather than on critical rationalism and exchange of arguments. Scientist cannot tolerate and endorse what is becoming a quality in weblogs and what is pioneered by IPCC-conformal virtual climate bloggers.

    I must urge you to reconsider.

    My questions to you:

    1. What is the most general formulation of the second law of thermodynamics?

    2. What is your favorite exact definition of the atmospheric greenhouse effect within the frame of physics?

    3. Could you provide me a literature reference of a rigorous derivation of this effect?

    4. How do you compute the supposed atmospheric greenhouse effect (the supposed warming effect, not simply the absorption) from given reflection, absorption, emission spectra of a gas mixture, well-formulated magnetohydrodynamics, and unknown dynamical interface and other boundary conditions?

    5. Do you really believe, that you can transform an unphysical myth into a physical truth on such a low level of argumentation?

    END-OF-QUOTE

    We did not get any response.”

    The whole answer by GT can be read here, comment 974. It’s worth the read.

    http://dotearth.blogs.nytimes.com/2008/01/24/earth-scientists-express-rising-concern-over-warming/?apage=39#comments

    This type of garbage uttered by individuals like you is exactly why this debate is so poisoned. I mean, seriously, this is part of the ‘denialosphere’ and BR are engaged in a ‘circle jerk‘?

    You, Dr. Halpern, are a disgrace to your institution and a disgrace to science. I am seriously considering compiling all this I have on you and submitting it to some ethics commission at Howard University.

    PS Alan, Haiko and Bart, I’ll get back to the issues you posed (I’m very busy right now, but I couldn’t let Eli’s gibberish just sit there, unchallenged). Also Tim, that’s very interesting, I will certainly read your paper, but beware, I might email you about some of the data ;)

  28. VS Says:

    PPS. I meant ‘Heiko’, of course, my apologies :)

  29. Scott Mandia Says:

    VS,

    Are you endorsing the Gerlich & Tscheuschner paper?

  30. VS Says:

    Hi Scott,

    I don’t have the theoretical knowledge to either endorse or dispute that paper (I mentioned it in passing in an earlier post, but I also stated that I’m unqualified to pass judgement).

    Some of my theoretical physicist friends though, whose nuanced judgement on these matters I sincerely trust, have endorsed it. The most critical one of them still argued that, at the very least, they raise extremely interesting points.

    Whether GT are right or not, however, is beside the point here. The problem is the anti-scientific attitude, based on insults and baseless claims, encouraged by agitators like Halpern (aka Eli Rabett).

    Now that’s something I don’t endorse.

  31. Marco Says:

    Good grief, VS, you now repeat Gerlich’s nonsense about Josh Halpern?

    Gerlich apparently thinks that he, as a theoretical physicist (*), knows better than LOADS of physicists (which includes Josh Halpern) about thermodynamics. It is more than likely that Gerlich (and Tscheuschner) never actually work with thermodynamics in their field. And yet they scold the likes of Arthur Smith (notably a theoretical physicist) or Ray Pierrehumbert, who have tried to set them straight on multiple occasions.

    (*) A theoretical physicist with REALLY low impact, it must be said. He might want to put more effort in actually publishing something worthwhile.

    You should also read their paper. It includes loads of odd references, and open attacks on various scientists and scientific bodies (Hans von Storch they don’t like too much, either). Enjoy yourself just looking at the references and their polemic. Then come back to us about how trustworthy G&T are *as scientists*. Forget all about the topic of the paper, the writing explains it all.

  32. VS Says:

    Hi Marco,

    As you mention Smith, this might be of interest to you

    http://arxiv.org/abs/0904.2767

    Anyhow, I don’t want to get dragged into a GT discussion because, as I stated, I don’t have the qualifications for it. Judging by the complexity of the matter, I doubt that anybody participating in this discussion here has that knowledge either.

    So I’ll leave it there.

    The nonsense Halpern just posted here (and on his own blog) however, where he couldn’t even get the definition of an integrated series right, I am qualified to judge.

  33. Heiko Gerhauser Says:

    Hi VS and Bart,

    I am no statistician though I like to dabble with the Excel RAND() function. If VS is saying what I think he is, namely that it’s not possible to see a clear trend in the data, if the random yearly addition/subtraction is cumulative, that makes eminent sense to me. And whatever Eli is on about in his post, it’s not about that.

    However, while I think the statistics are entertaining and interesting, and maybe my choice of the word “torturing” wasn’t quite right, I do wonder how the statistics relate to statements like “In this case, predictions of catastrophic man-made warming are quite premature, and certainly not solid enough to base global policy on.”

    “Catastrophic” is poorly defined. Say it means 5C by 2100; then we need a clear upward trend break anyway. And that’s also what the modellers are saying. Much of the warming is masked by aerosols, and we presume that they won’t keep on increasing. But “catastrophic” could also mean 0C increase, and India turning into a desert with storm damage in the US tripling. How much, and more importantly how directly, do the statistics calcs really matter in that context? I think you are right on the statistics question; I am rather dubious whether this rightness means we should do less or should do more about climate change.

  34. Bart Says:

    From the comments section at http://tamino.wordpress.com/2010/03/05/message-to-anthony-watts/:

    “The farther away the actual temperature gets from the equilibrium temperature, the faster the system will attempt to regain equilibrium.”

    This is bounded by physics: Temperatures continuing to wander off in one direction without a change in forcing would cause an energy imbalance, which would force the temperatures back to where they came from: equilibration. In general, long term changes in global avg temp are the consequence of a non-zero radiative forcing, whereas temperatures jiggle up and down without a clear trend if there is no radiative forcing acting upon the system.

    The random walk argument “is the same mistake Pat Frank made in his ridiculous Skeptic magazine article purporting to show that the uncertainty in future temperatures grows without bound if you propagate uncertainty over time, leading to the absurd conclusion that the surface temperature of the Earth in 2100 is uncertain within hundreds of degrees. (A little basic common sense should have told him that there are basic physical reasons to expect that the climate is not going to be hundreds of degrees hotter or colder within a century’s time, and thus there might be something wrong with the way he was propagating uncertainty.)”

    The variation around the linear trend in global avg temp exhibits autocorrelation, which makes the estimation of the trend, and especially the errors of the trend, more tricky, but it doesn’t make an OLS trend useless for visualizing what’s happening.
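
    As a toy illustration of that physical bound (a sketch only; rho and the noise level are made-up numbers), compare an autocorrelated-but-damped process with a true random walk:

        import numpy as np

        rng = np.random.default_rng(1)
        n, sigma = 150, 0.1

        def ar1(rho):
            # y(t) = rho * y(t-1) + noise: rho < 1 is pulled back toward equilibrium,
            # rho = 1 has "perfect memory" and wanders freely (a random walk)
            y = np.zeros(n)
            for t in range(1, n):
                y[t] = rho * y[t - 1] + sigma * rng.standard_normal()
            return y

        damped = ar1(0.7)  # autocorrelated but stationary: the spread levels off
        walk = ar1(1.0)    # unit root: the spread keeps growing with time

        print("spread, rho = 0.7:", damped.std().round(2))
        print("spread, rho = 1.0:", walk.std().round(2))

    The rho < 1 case is autocorrelated, which widens the error bars on a fitted trend, but the restoring force keeps it bounded; only the rho = 1 case wanders without limit.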

    See these posts that explain more about trend analyses of temp data and the nature of the ‘noise’:
    http://tamino.wordpress.com/2009/12/15/how-long/

    http://tamino.wordpress.com/2008/08/04/to-ar1-or-not-to-ar1/

    “Most regular readers here are familiar with autocorrelation in time series data. It’s the property that for the random part of the data (the noise rather than the signal), the values are not independent. Instead, nearby (in time) values tend to be correlated with each other. In almost all cases the autocorrelation at small lags (very nearby times) is positive, so if a given random term is positive (negative), especially if it’s large, then the next term is likely to be positive (negative) as well. For global average temperature, the random part of the fluctuations definitely shows autocorrelation. This makes estimates of trend rates from linear regression (or any other method, for that matter) less precise; the probable error from such an analysis is larger than it would be if the random parts of the data were uncorrelated. In fact, a great many time series in geophysics exhibit autocorrelation, which makes the results of trend analysis less precise, sometimes greatly so.”

  35. VS Says:

    Bart, I really don’t see your point. I don’t think you understand what an integrated series is.

    Let: Y(t)=rho*Y(t-1)+error(t)

    This is the standard expression for an AR(1), or first order autoregressive, time series process.

    An autocorrelated process is one with a rho unequal to zero. The higher the persistence, the harder inference becomes. And when the persistence is perfect, i.e. rho=1, we are talking about a first order integrated series, or I(1), which is no longer stationary (so no standard inference). You have to difference it to obtain a stationary series, namely (if rho=1):

    D_Y(t)=error(t)

    The series is also said to contain a unit root. Calculating linear, or quadratic, or whatever trends in this context is spurious. Your confidence intervals are furthermore meaningless.

    Cointegration, with other integrated variables, is a (the) method for multivariate statistical inference when dealing with series containing unit roots. Hence the Nobel prizes.
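
    For anyone who wants to see what this looks like in practice, here is a minimal sketch in Python with statsmodels (simulated series with illustrative coefficients, not the climate data):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(2)
        x = np.cumsum(rng.standard_normal(200))  # an I(1) series
        y = 0.8 * x + rng.standard_normal(200)   # cointegrated with x: y - 0.8*x is stationary
        z = np.cumsum(rng.standard_normal(200))  # an unrelated I(1) series

        # Engle-Granger style test; H0: no cointegration
        print("y vs x p-value:", round(coint(y, x)[1], 3))  # should be small: cointegrated
        print("z vs x p-value:", round(coint(z, x)[1], 3))  # typically large: no relation

    A small p-value for the first pair, and not for the second, is exactly the distinction that a regression in levels cannot make on its own.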

  36. Heiko Gerhauser Says:

    Hi Bart,

    a system that actually is in equilibrium won’t move away from it. A closed vessel at constant temperature everywhere will not spontaneously develop temperature differences, not at the 1 mm, 0.001C, 1 s type level. One key problem I have with the argument is therefore that I don’t understand why the Earth should get warmer/colder even over a few months, unless there is a “forcing”, ie something that does in fact force the system away from equilibrium.

    Tamino doesn’t go into what causes the noise, which after all should not be there for a macroscopic system in equilibrium. Now there is an obvious reason we don’t expect the noise to random walk us to +20 or to -20C over a hundred years, namely that sort of temperature history is quite inconsistent with proxy indicators of past behaviour. It’s rather less clear that whatever causes the annual variations in temperature cannot random walk us 0.5C over a century.

    I am not a statistician, unlike apparently VS. I don’t know the exact meaning of the term autocorrelation. But I am pretty sure that unless we get away from the statistics to the underlying mechanisms, it’s bound to lead back to the fact that you can’t distinguish readily between a stackable random component and a trend of 0.5C over a century.

  37. Marco Says:

    VS:
    Ah yes, Gerhard Kramm. As bad as Gerlich, desperately trying not to get into the 2nd law of thermodynamics kerfuffle, and making several mistakes himself. Once again Ray Pierrehumbert tried to educate these people (being an atmospheric physicist himself), but failing due to the inability of Kramm (in particular) to learn.

  38. Bart Says:

    Heiko,

    I wholeheartedly agree that understanding the underlying physical mechanisms is key, and this seems to be missed by VS. I think the physics of the problem bounds the temperature to such a degree that it is not properly characterized as a stackable random component, though I’m not a statistician by any means.

  39. VS Says:

    Heiko you wrote:

    “I do wonder how the statistics relates to statements like “In this case, predictions of catastrophic man-made warming are quite premature, and certainly not solid enough to base global policy on.””

    You also raised some interesting points, such as the difficulty of detecting a man-made global warming signal in the data, and I’ll try to get back to you a bit later on that.

    Alan you wrote:

    “Call me old-fashioned, but the AGW argument is not about statistics – it’s about physics, isn’t it?” and “Why is the AGW discussion getting drawn into this at all? I don’t get it.”

    Bart, you agreed with what Alan wrote.

    Alright, allow me to elaborate on why statistics is relevant in this case. Let me start by stating that every, and with that I mean every, validated physical model conforms to observations. This is the basic tenet of positivist science. However, usually within the natural sciences, you can experiment and therefore have access to experimental data. The statistics you then need to use are of the high-school level (i.e. trivial), because you have access to a control group/observations (i.e. it boils down to t-testing the difference in means, for example).

    In climate science, you are dealing with non-experimental observations, namely the realization of our temperature/forcing/irradiance record. In this case, the demand that the model/hypothesis conforms with the observations doesn’t simply disappear (if it is to be considered scientific). It is made quite complicated though, because you need to use sophisticated statistical methods in order to establish your correlation.

    So correlation is, and always will be, a necessary condition for validation (i.e. establishing causality) within the natural sciences. If you don’t agree with me here, I kindly ask you to point me to only one widely used physical model, for which no correlation using data, be that experimental or non-experimental, has been established. Do take care to understand the word ‘correlation’ in the most general manner.

    Now, I’ve tried to elaborate this need in my previous posts, but I fear that we might be methodologically too far apart for this to be clear, so allow me to try to turn the question around.

    Let’s say that you have just developed a new hypothesis on the workings of our atmosphere. You read up on the fundamental results regarding all the greenhouse gasses, and the effects of solar irradiance on them. You also took great care to incorporate the role of oceans and ice-sheets etc into your hypothesis (etc. etc. i.e. you did a good job).

    Put shortly, you developed a hypothesis about the workings of (or causal relations within) a very complex and chaotic system on the basis of fundamental physical results.

    Now, guys, tell me how you think this hypothesis should be validated? Surely it is not correct ‘automatically’, simply because you used fundamental results to come up with that hypothesis? There must be some checking up with observation, no?

  40. VS Says:

    Bart you wrote:

    “I wholeheartedly agree that understanding the underlying physical mechanisms is key, and this seems to be missed by VS. I think the physics of the problem bounds the temperature to such a degree that it is not properly characterized as a stackable random component, though I’m not a statistician by any means.”

    Understanding the underlying physical mechanism is indeed the key, I never disputed that. What I’m talking about here is the validation process (see previous post).

    I’ll try to elaborate on this tonight. My point in short is, that in our subsample of observations, on which we perform our statistical analysis (i.e. the past 150 years or so), temperature is in fact unbounded and can (‘must’ even) comfortably be described as an I(1) process.

    Again, I’ll try to find some time tonight to type that out. Can you in the meantime take a careful look at the definition of integrated series, cointegration and related statistical tests? :) You might not be a statistician, but this stuff is not that complicated to understand, and I feel you ought to be interested… ;)

  41. Tim Curtin Says:

    VS – once again I am most impressed by your contributions to this above-average thread. What is amazing is that even now Judith Lean can write a whole paper funded by NASA (Jan-Feb 2010, wires.wiley.com/climatechange) that ignores the whole issue of cointegration in reaching its conclusion that “most (90%) of industrial warming (sic) is due to “anthropogenic effects rather than to the Sun”.

    Here are my latest regression results, for July values of stated variables at Point Barrow from 1960 to 2006.
    Variable Coefficients Standard Error t Stat p

    RF 0.000785569 0.045665476 0.017202679 0.986358368
    dAVGLO 0.000693091 0.000444636 1.558782024 0.126734481
    dH2O 7.216078519 1.267334653 5.693901371 1.17822E-06
    dRH -0.16626401 0.072127426 -2.305142712 0.026294587
    dAVWS -0.419315814 0.331092703 -1.266460449 0.212496345
    Only dAVGLO, H2O, and RH are stat. sig. (adj. R2=0.44). What happened to Lean’s 90% for RF? Changing the RF values from absolutes to first differences actually turns the coefficient on RF negative! I confess these are probably naive results, but it will take a lot to get the RF up to significant. These results are little different from those I have for January 1960-2006 at Pt Barrow, although dAVGLO fades (and dAVWS becomes stat. sig.), because it is virtually total darkness there at all times in December and January.
    Notes: RF is radiative forcing of CO2; the others are first differences of AVGLO (= total horizontal surface solar radiation), H2O = precipitable water in cm, RH = relative humidity, and AVWS = average wind speed.

    So back to school for Judith Lean, VS are you available to coach her?

  42. Bart Says:

    VS,
    “There must be some checking up with observation, no?”

    Of course. See e.g. these graphs comparing model (i.e. hypothesis) results with observations: http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-9-5.jpg
    Top panel is with including all radiative forcings (natural and man made); bottom panel is with including natural forcings only (notably solar and volcanic on this timescale).

    In both cases it’s obvious that global avg temp is bounded; in the absence of a net forcing, it doesn’t keep wandering off in one direction.

    You’re very generous in giving advice, but seem less keen on taking advice from others. Have you read up on what climate models actually do (I provided some links in previous comments)?

  43. Bart Says:

    Tim Curtin,

    It appears that you only tested for the response to CO2. However, there are more forcings than just CO2. Climate responds to the net forcing.

  44. Tim Curtin Says:

    Apologies, I forgot to say that those regression results for July at Point Barrow were for changes year on year in max temps. The results for dTmin are very similar, with RF irrelevant to the actual record from 1960 to 2006.

    Bart: I have just checked AR4 Fig. 9.5. The “natural forcings” are, it appears, mainly TSI, which is irrelevant to global mean temperatures as derived from measurements at various surface locations with different latitudes and surface SR. Does this Chapter 9 of WG1 AR4 ever mention cointegration? I think not. Does it ever display any regression analysis results? NEVER. Moreover the text (p.684) states that Fig. 9.5 was actually derived from model simulations, and NOT observations as claimed in the caption and by you.

  45. Tim Curtin Says:

    Reply to Bart saying: “Tim Curtin, It appears that you only tested for the response to CO2. However, there are more forcings than just CO2. Climate responds to the net forcing”.

    Bart, my regression results clearly included both radiative forcing (RF) from increasing atmospheric CO2 and from direct “horizontal” solar radiation, “AVGLO”, in addition to “H2O” which represents any feedback from water vapour and from RH, relative humidity. What have I left out?
    The RF is always irrelevant – and often negative!

  46. Heiko Gerhauser Says:

    Hi VS and Bart,

    as Bart knows I am not particularly impressed by this comparison between models with anthropogenic forcings and without anthropogenic forcings. The fact that the forcings are rather poorly known kind of gets neglected there. I am also dubious about the claim that people just cannot construct models that reliably hindcast the last 100 years with a bit more variability. I can think of many things that could cause this variability, say aerosol formation over the Atlantic connected to dust storms, sea ice changes due to changing wind patterns. I’ve got the strong suspicion that failing to come up with a model that comes up with the right temperature path ex anthropogenic forcings is largely due to a lack of trying and the limited number of people qualified to write GCM’s. And anyway, what would be the point?

    Let’s postulate here that the data are poor and not yet suitable to do much sensible validation of the models. Don’t you think, VS, that we still need to make some stab at predicting the future? What do you propose given the data aren’t able to tightly constrain the models?

  47. Bart Says:

    Tim Curtin,

    In Fig 9.5 from AR4 (http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-9-5.jpg) the black lines denote observations, whereas the colored lines are the model results (with the ensemble mean as a thick colored line).

    Other forcing you left out: Notably aerosols, but also the non-CO2 greenhouse gases and volcanoes.

  48. Arthur Smith Says:

    VS – re your reference to Kramm et al’s silly attack article – I wrote here on Kramm’s incapability of understanding very simple explanations:

    http://arthur.shumwaysmith.com/life/content/why_are_some_people_so_easily_confused

    and apparently nothing has changed. You can choose to believe Kramm, or you could actually spend a little bit of time reading my article and thinking about it a bit… Up to you.

  49. Scott A. Mandia Says:

    I have to thank all of you for this excellent discussion. I feel I am learning much. It certainly appears that the human signature in the recent T record is not as clear-cut as I had thought. Having said this, I must quote Nobel Laureate Sherwood Rowland (referring then to ozone depletion):

    “What’s the use of having developed a science well enough to make predictions if, in the end, all we’re willing to do is stand around and wait for them to come true?”

    I fear that if we wait long enough for the statistical proof it will be too late to reverse the crash-course I and others are convinced we are on. Here is what I believe are the key points (some of which I have already made):

    1) We know that increasing CO2 forces climate change (warming).
    2) We have a pretty good idea that a doubling of CO2 will directly result in about 1K of warming.
    3) There is reasonable probability that the resulting feedbacks will produce at least 2K additional warming (lower bound) with 3K more likely.
    4) We are also measuring CO2 increases of about 2 ppm/year and rising (except for 2008 due to decreased industrialization from the global recession).
    5) These increases are primarily from humans.
    6) About half of the increase from pre-IR times has occurred in the past 35 years.
    7) It is likely that today’s CO2 level is unique in the past 15 million years – certainly it is in the last 650,000 years.
    8) The last time that Antarctica and Greenland had no ice (approx. 50 million years ago), CO2 levels were 425 ppm +/- 75 ppm. Today’s values are already within that range.
    9) Sea levels were about 120m higher than today at that time.

    We are increasing CO2 rapidly and I think it is quite unwise to take a wait and see approach, especially in light of the fact that there appears to be no viable alternative explanation for the recent warming.

  50. VS Says:

    Hi Arthur,

    Thanks for responding.

    I actually ‘read’ all three of your pieces (so GT, you, and K et al), but I simply don’t have the knowledge to properly evaluate them on my own. Hence, I’m keeping an open mind with regards to this until I obtain more observations (talk about professional deformation ;).

    Perhaps I’m making a fallacy of false compromise here, but I cannot imagine GT, or you, or, in their turn, K et al, being completely ‘wrong’. Also, given the tone of the debate, as evident on various blogs (and escalated by individuals such as Halpern), it’s hard for a non-physicist to tell who’s closer to the truth. In particular, the more people resort to insults and ad hominems, the more skeptical I am about anything that comes out of a discussion.

    In any case, don’t you think that more (theoretical) physicists should involve themselves in the discussion? It seems like quite the significant debate, but it apparently ‘lives’ only in the blogosphere, and it’s been three years already since GT posted their (then) working paper online. I find that strange.

    One final question, and forgive me if I’m sounding ‘smart’ (please assume good faith), but I was wondering about this since the first time I saw your comment (i.e. 2008 comment, not the one here).

    GT claim that there is no rigorous (fundamental) derivation of the atmospheric greenhouse effect present in the literature. Your comment, as far as I gathered, is an answer to that.

    If what they say is correct, shouldn’t you submit it for publication? Again, if they are correct, then your comment should be quite a significant contribution to the literature.

  51. VS Says:

    Hi Bart,

    First of all, I am most definitely reading your links. As a matter of fact, some of them were very informative, and I thank you for the time you took to dig them up. They have however not refuted my main statements.

    Allow me to start with the autocorrelation link you posted here:

    http://tamino.wordpress.com/2008/08/04/to-ar1-or-not-to-ar1/

    This blog entry is clearly written by a non-statistician. In fact, I responded to this already here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1266

    Contrary to what the author of that blog post claims in the comments, here:

    “[Response: The ADF test is really for the presence of a unit root in an autoregressive process, which is rather a different critter. A trend could easily fail significance without having a unit root.

    Econometrics is good stuff, but non-economic statistics is advanced as well. In science, trends are clues to the secrets of the universe — and in my opinion that’s better than money.]”

    the ADF test and autocorrelation are very closely related; it is not ‘a different critter’. As a matter of fact, if you look closely at the equations I posted above, you can see that perfect autocorrelation, which we detect using the ADF test (i.e. rho=1), is an indication of the series containing a unit root. In particular, the H0 (null hypothesis) of the ADF test is that the series contains a unit root, while the Ha (alternative hypothesis) is that it contains a deterministic trend.

    The test actually distinguishes between the two, that’s why it’s so important.

    Now, Heiko and Bart,

    Heiko, regardless of the fact that you are not a statistician, you are displaying a good deal of proper intuition. In that sense, we are indeed slowly but surely arriving at the crux of the matter. As Tim posted above, simply ‘eyeballing’ the two different simulation results doesn’t really prove anything, and definitely doesn’t constitute a formal empirical verification of a hypothesis.

    If you want to use these model outputs for verification, there are some formal demands. We need to see a rigorously derived statistical test comparing the model outputs with the data. This derivation has, at the very least (!), to include the following components:

    1) Distribution of the test statistic under the H0 that the output corresponds to the underlying data generating process (DGP)
    2) The distribution of the test statistic under the Ha, where this alternative hypothesis is that the output doesn’t correspond to the DGP
    3) The derivation of these distributions has to account for the endogeneity of simulation results, namely the effects of using selected empirical (physical) outputs as inputs in the simulation: the issue here being ‘overfitting’. You would be surprised how much you can ‘fit’ without having any clue about the DGP.

    Without these elements, how can we (formally) distinguish between the validity of GCM outputs and any other simulation generated?

    Note that simply comparing the variance fitted (i.e. the equivalent of a R2 statistic) is a big no-no, and will result in spurious inference. You need rigorous testing.

    For example, regressing two completely unrelated I(1) series on each other results in an expected value of the R2 statistic of around 0.5. Tim, as you properly pointed out, this is the equivalent of what apparently happened to Judith Lean’s ‘regressions’ (even though I still have to read that paper, I presume you are not just making stuff up when you say she ignores unit roots :)
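
    The spurious-regression point is easy to verify numerically; a minimal sketch (simulated data, illustrative sample length):

        import numpy as np

        rng = np.random.default_rng(3)
        r2 = []
        for _ in range(1000):
            # two completely unrelated I(1) series
            x = np.cumsum(rng.standard_normal(150))
            y = np.cumsum(rng.standard_normal(150))
            slope, intercept = np.polyfit(x, y, 1)  # naive OLS of y on x
            resid = y - (intercept + slope * x)
            r2.append(1 - resid.var() / y.var())

        r2 = np.array(r2)
        print("mean R2 of unrelated walks:", r2.mean().round(2))
        print("share of runs with R2 > 0.3:", (r2 > 0.3).mean().round(2))

    Despite there being no relation whatsoever between the series, the fits look ‘good’ far more often than standard theory for stationary data would suggest.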

  52. VS Says:

    Hi Tim,

    Thank you again for your confidence, but there are many (many!), much more skilled, time series specialists that should ‘coach’ Lean :) Bart, the Dutch are quite good at it, perhaps somebody should get, say, prof. dr. de Gooijer, or prof. dr. Boswijk, or any Tinbergen fellow specializing in TSA for that matter, to do it :)

    While I have certainly had formal training in time series analysis, I’m in a different branch.

    What’s so strange about the whole debate, however, is that these tenets (which I’m elaborating on here) of modern statistical testing are not at all ‘arcane’. Cointegration and unit root testing is widely taught, and should be a standard part of the toolkit of anybody wading into the analysis of time series.

    Clearly evident is the fact that this entire field is completely ignored in the debate. A few individuals, such as Kaufmann and you, are exceptions, and whatever the differences in opinion and approach, I think both of you should be lauded for trying to draw attention to it. If there were no publications by Kaufmann, econometricians like B&R wouldn’t have been drawn into the fray. Now, this is progress in science. Mistakes are OK, as long as they can be weeded out, and the debate remains open and civilized.

    I think also that you are making an extremely valid and important point with the distinction between solar irradiance (TSI) and the radiation actually reaching the surface (SSR). Intuitively speaking the two series represent something completely different, and as far as I gathered, a basic condition for the greenhouse effect is that the sunlight actually reaches the ground (so this is the series you are actually interested in, it also helps in bypassing a part of the ‘cloud’ problem).

    By taking the satellite measurements, you in fact are ignoring all the variance displayed on the surface! As I stated earlier, statistical testing deals with explanation of variance. In this sense, you cannot first ‘artificially eliminate’ the variance from a series (so ‘averaging out’ is questionable), and then claim that the variance explains nothing (as many climate scientists do, awkwardly enough).

    As for the regression results you posted, I have a bit of a hard time interpreting them as they are stated. Could you also post your full specification? I presume you also found GHG forcings to be I(2) and temperatures to be I(1). How about the SSR series, also I(1), just as the TSI?

  53. VS Says:

    Correction.

    I wrote: “In particular, the H0 (null-hypothesis) of the ADF test is that the series contains a unit root, while the Ha (alternative hypothesis) is that it contains a deterministic trend.”

    That was sloppy. The Ha of the ADF test, put generally, is that the series is in fact stationary (and in this case ‘could’ contain a deterministic trend).

  54. VS Says:

    Another correction:

    I just saw that I wrote in my first post that the lag selection in my ADF tests was based on the Schwarz Information Criterion, or SIC. In fact, it was based on a related measure, the Akaike Information Criterion, or AIC.

    Using the SIC, which leads to no ‘lags’ being used, results in remaining autocorrelation in the errors of the test equation. That’s dangerous for inference.

    In the context of these temperature series, the AIC leads to 3 lags being employed, and successfully eliminates all remaining autocorrelation in the errors of the test equation (which has a deterministic trend as alternative hypothesis).

    A small issue, but I’d rather set it straight now, before somebody brings it up.
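
    For those who want to reproduce this kind of test, statsmodels implements the ADF test, including AIC-based lag selection and a trend alternative; a minimal sketch on a simulated series (since I can’t attach the temperature data here):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(4)
        y = np.cumsum(rng.standard_normal(130))  # a simulated I(1) series

        # H0: unit root; Ha: stationary around a deterministic trend ('ct'),
        # with the lag length picked by the Akaike criterion
        stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="ct", autolag="AIC")

        print("ADF statistic:", round(stat, 3))
        print("p-value:", round(pvalue, 3))
        print("lags used:", usedlag)

    Differencing the series once and re-running the test should then reject the unit root, which is what ‘integrated of order one’ means in practice.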

  55. Tim Curtin Says:

    Bart: I must first apologise for misreading the caption to Fig.9.5, but it remains inadequate, as it does not specify how much of the models’ simulations of the observed global temperature record since 1900 incorporates the observed natural and anthropogenic forcings. For it is known that the models’ retrospective simulations include “tuning” to get them closer to observed climate. This feature of the models explains their inability to produce accurate projections when bereft of current parameter values.
    Secondly, reverting to your graphs at the beginning of this thread, you say “Temperatures jiggle up and down, but the overall trend is up: The globe is warming”. But the visual impression that that is the case depends heavily on the long period of negative and zero temperature anomalies for the period 1880-1930, when the instrumental coverage of the world’s surface areas was far from comprehensive. Africa was completely absent until 1910, Central America and SE Asia were little better until well after 1900 (see CDIAC’s or NOAA’s maps of % coverage by decade) and not much better until the 1940s. Your graph implies a range for the anomaly of no less than 1oC from low to high, when from 1940 to after 2000 it is only 0.3oC (and within error range). I note that the anomaly is based on the 1901-2000 record as baseline, but when the first thirty years of that period were notable for sparse global instrumental coverage, your comments and the graph are very misleading.

    Thirdly, in regard to my own regressions for data at Point Barrow, you say: “Other forcing you left out: Notably aerosols, but also the non-CO2 greenhouse gases and volcanoes.” Actually the aerosols come in through NOAA’s variables “TOT, OPQ, H2O, TAU – Average TOTal and OPaQue sky cover (tenths), precipitable water (cm), and aerosol optical depth (unitless)”. Apart from “H2O”, none of these proved to be stat. sig. at Pt Barrow except, marginally, TAU, and then, like OPQ, it was negative (Adj R2 0.54; dependent variable Tmin July 1960-2006):
    #1 Intercept set at 0; 1st differences of RF and all other variables
    Tmin Coefficients t Stat P-value
    Adj R2 0.54
    dAVGLO -0.00052699 -0.700504939 0.487879215
    dH2O 4.51171828 7.428155148 6.52755E-09
    dRH -0.048537845 -1.38896361 0.172930893
    dAVWS 0.461368278 2.786369886 0.008271971
    dRF -18.49913419 -0.651741607 0.518490571
    dTOT 0.534587743 1.28946864 0.205028785
    dOPQ -0.823960411 -1.424652266 0.162419238
    dTAU -6.473953906 -1.53685917 0.13261355

    #2 Absolute RF, 1st differences for all others.
    Adj R2=.55 Coefficients t Stat P-value
    Intercept -1.773208398 -0.13500869 0.893336799
    dAVGLO -0.000488581 -0.634314798 0.529778026
    dH2O 4.520517936 7.295976214 1.14733E-08
    dRH -0.048497097 -1.359099478 0.182342206
    dAVWS 0.45725584 2.712208284 0.010080598
    dTOT 0.501954286 1.193550476 0.240250731
    dOPQ -0.782868662 -1.32850411 0.192147753
    dTAU -6.150860845 -1.438031873 0.158829908
    R.F. 0.317643048 0.134203849 0.89396874

    Note: I use absolute RF (= 5.35*ln(CO2_t/280)) to give CO2 its best chance, especially as it is the total concentration that matters, not differenced changes therein, even though clearly it is not then a stationary variable; first or second differences reduce its significance even further.
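
    In code form, that standard simplified expression (the widely used 5.35*ln(C/C0) approximation, with 280 ppm as the pre-industrial baseline) is just:

        import math

        def co2_forcing(ppm, baseline=280.0):
            # simplified radiative forcing of CO2 relative to pre-industrial, in W/m^2
            return 5.35 * math.log(ppm / baseline)

        print(round(co2_forcing(385), 2))  # about 1.7 W/m^2 for a late-2000s concentration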

    Evidently neither surface solar radiation nor RF plays well at Pt Barrow, but then the sun hardly does much there even in July – the max T reached 10oC only once between 1990 and 2006, and the rising RF from increasing [CO2] clearly did nothing to warm Pt Barrow in that period. As the other GHGs Bart mentions are collectively less than half the [CO2] component of RF (at less than 1 W/sq.m in 2005, of a total RF of 2.63), it seems hardly worth bringing them in.

    Similar analysis at Hilo (near Mauna Loa) shows the RF remains nugatory, and SSR (“AVGLO”) only becomes significant with annual data. Curiously the trends for both Max and Min T at Hilo appear to be down not up.

    Here are the results for Mean Min Temps July 1960-2006 at Hilo (the smallish coastal town at the foot of the Mauna Loa volcano, where the atmospheric level of CO2 has been measured since 1958). Not surprisingly, Relative Humidity proves to be significant at Hilo, unlike at Barrow.
    Adj R2=.50 Coefficients t Stat P-value
    Intercept -6.134288504 -0.687474786 0.495958458
    RF 1.112332724 0.6916983 0.493331595
    dAVGLO -0.000142251 -0.419898285 0.67692449
    dH2O 1.949565126 4.832036925 2.24049E-05
    dTOT -0.959141223 -2.887802382 0.006369984
    dOPQ 1.148820878 3.331364883 0.0019328
    dRH -0.079159467 -2.396252796 0.02159141
    dTAU 3.025541608 0.750188316 0.457761293

    It is true, Bart, that I left out volcanoes, but they hardly belong in a time series analysis; it is 17 years since the last of any consequence, but perhaps Mauna Loa will be the next – it is far from dormant!

    More generally, Bart, it seems to me (and VS) that you climatologists make the mistake of working only with aggregates like TSI and Global Mean Temperature, without even distinguishing between maximum and minimum, and NEVER do the monthly analysis BY location that I do. And again I ask you to point me to any regression analysis in that Chapter 9, misleadingly entitled “Understanding and Attributing Climate Change” (AR4, WG1). Had its myriad authors (led by Hegerl with her links to CRU) and David Karoly done some regressions, they would not have been able to reach their conclusion of 90% likelihood “that humans have exerted a substantial warming influence on climate…”. Well, where’s the evidence for that at Barrow and Hilo where their atmospheric CO2 concentration is actually measured?

  56. Bart Says:

    VS wrote:

    “it’s hard for a non-physicist to tell who’s closer to the truth”

    That’s the crux of the matter as I see it. I addressed exactly this question in an older post. I think the common sense ‘hints’ that I assembled there can go a long way to separating the wheat from the chaff in the popular debate. For health issues, or any complex scientific subject with societal relevance, it is very similar.

    Btw, I wrote a new post outlining my thoughts on the ‘random walk’ hypothesis, argued mostly from a physical perspective.

  57. VS Says:

    Tim:

    “Had its myriad authors (led by Hegerl with her links to CRU) and David Karoly done some regressions, they would not have been able to reach their conclusion of 90% likelihood “that humans have exerted a substantial warming influence on climate…”.”

    I was already wondering where on Earth they got that probability of 90% from.

    A hypothesis is never true with a ‘probability’; this is the first thing you learn in statistics… it’s either true, or false… the only ‘probabilities’ in statistics are (if you did everything correctly) the probabilities with which you make Type I / Type II errors. A big conceptual difference.

    Do you, or anybody else here, have an idea how they arrived at this ‘probability’?

    Bart, you are quoting me out of context here, ts ts ts. The full quote is:

    “Also, given the tone of the debate, as evident on various blogs (and escalated by individuals such as Halpern), it’s hard for a non-physicist to tell who’s closer to the truth.”

    ..and it related to discussions of the greenhouse effect conjecture and fundamental physics.

    How about you respond to the validation issue we spent a couple of days discussing, before running off to another thread.

    In particular this post right here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1284

  58. Tim Curtin Says:

    VS, many thanks for your kind comments. It is indeed weird that the IPCC AR4 conclusions use statistical terminology without ever using any modern methodology. As you said, “cointegration and unit root testing is widely taught, and should be a standard part of the toolkit of anybody wading into the analysis of time series”, but these methods are nowhere to be found in mainstream AGW literature.

    As for my regression results, as I explained in my long post responding to Bart, I have taken the liberty of not differencing the RF variable, to give the true believers their best shot, as it only gets worse if you do! You asked could I post my full specification? That will be in my paper if I ever get to write up my results! I just used others’ claim that temperatures are I(1). For the SSR series, I would go to I(1), but as there appears to be no evidence of multi-collinearity in my differenced data regressions (unlike in the absolute values sets), I have just done as above, pending further tests.

    I look forward to your response to Bart on random walks.

  59. Heiko Gerhauser Says:

    Hi Tim,

    the conclusion (at least 90% probability that at least 50% of warming over the last 150 years is net anthropogenic) is fine. It’s based primarily on points 2 and 3 of Scott’s list (doubling CO2 gives 1K, water vapour feedback adds at least another 1K) and a tally of the forcings. In principle, it’s of course possible that negative feedbacks we poorly understand act thermostat-like. But I think it’s quite reasonable to demand good evidence for these purported feedbacks and, in the absence of that evidence, to assume that the simple physics of radiation and of humidity dependence at constant relative humidity hold.

    I actually agree with you on the station issue, and disagree with the IPCC here. Their error band is in my opinion too small at just +/- 0.2C. But I don’t see that affecting the above statement of likelihood. Maybe temperature is only up 0.3C compared to 150 years ago, but then I’d think 100% or more than 100% of that is due to net anthropogenic forcings in all likelihood.

  60. Heiko Gerhauser Says:

    Hi VS,

    I think you are too focused on the need to validate the theory (points 2 and 3 of Scott’s list) with statistical methods against the temperature data. Let’s presume for the moment that the data are not good enough for that. I actually think points 2 and 3 of Scott’s list are very strong indeed, and the theory that needs validation is the one about negative feedbacks, and not just that these negative feedbacks are there now, but also that they’ll persist in the face of stronger forcings.

    Thermostats are often explained with central heating systems, but when it gets too cold, the heating will first no longer maintain a constant temperature, and eventually it may break down altogether due to frozen pipes.

  61. Bart Says:

    See here the IPCC guidelines on assessing and communicating the uncertainties.

    VS, Tamino is a professional time series analyst; he sure knows what he’s talking about. I refrain from commenting on the statistical details, because I lack the background. Instead, my reasoning is based more on physics.

    For lack of time, I’ll just mention these two links that seem relevant to the testing of models:
    http://www.realclimate.org/index.php/archives/2008/01/uncertainty-noise-and-the-art-of-model-data-comparison/
    http://www.thebulletin.org/web-edition/roundtables/the-uncertainty-climate-modeling

  62. VS Says:

    Hi Tim

    Yeah, the non-differencing was the first thing that caught my attention:

    “Note: I use absolute RF (= 5.35*ln(CO2_t/280)) to give CO2 its best chance, especially as it is the total concentration that matters, not differenced changes therein, even though clearly it is not then a stationary variable; first or second differences reduce its significance even further.”

    You ought to difference it though, as, like you state, it is not stationary. I think BR do a very good job of arriving at their specification :) Try applying their method to your local data. If I manage to reserve some time, I’ll email you about it; perhaps we can take a look at it together.

    As for Bart’s new post, I really have to collect some energy to dive into it again. I also thought I made it quite clear here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1226

    “I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks. That’s furthermore quite clear when we look at Ice-core data (up to 500,000 BC); at the very least, we observe a cyclical pattern, with an average cycle of ~100,000 years.

    However, we are looking at a very, very small subset of those observations, namely the past 150 years or so. In this subsample, our record is clearly observed as a random walk. For the purpose of statistical inference, it has to be treated as such for any analysis to actually make sense mathematically.”

    I also don’t understand Bart’s problem with ‘unboundedness’. The whole point being that the variance of the error in the random walk process is limited, hence temperatures are de facto bounded on the very (very) small interval we are looking at (i.e. the bounded, glacial-interglacial, cycle is 100,000 years, our sample is a bit over 100 years… jeez, how complicated is this to understand?)

    Heiko,

    I understand where you are coming from, but the whole point is that the burden of proof is on THEM, not me or anybody disputing it. We have established, so far:

    1) There is no rigorous empirical proof that CO2 is (significantly) influencing temperatures

    2) GCM model outputs have not been formally tested for their ‘fit’ so those graphs make no sense (see my previous reply to you and Bart). Furthermore, they seem to perform rather badly in prediction, another red flag (I can’t find the reference right now, but some Greeks did a GCM prediction evaluation, and the outcome is that GCM’s are pretty bad at it)

    3) Man-made global warming is a phenomenological, rather than fundamental, model, so given 1 and 2, it simply hasn’t been validated, ergo any conclusions need to be treated as hypothesis rather than fact.

    Allow me to elaborate here on 3. As far as I’m familiar with results from chaos theory, these imply that while we may understand all the individual components of a process (already a long shot in the case of climate, but OK), the aggregated effect of these components can still result in unpredictable behavior. This is furthermore a mathematical/physical result.

    So, just knowing the physical basis of a system doesn’t mean that we can simply aggregate and extrapolate (not to mention aggregate and extrapolate AND leave out over half of the relevant factors, e.g. clouds).

    Putting these three together, we simply cannot claim any certainty, and it is up to those making this extraordinary claim (i.e. that a trace gas will devastate the stability of our climate) to come up with some extraordinary evidence.

    So far, they have failed.

  63. VS Says:

    Bart,

    “VS, Tamino is a professional time series analyst; he sure know what he’s talking about.”

    That’s an authority fallacy.

    Please invite Tamino to come over here and clarify himself. I wrote down quite clearly why his comment is simply wrong in light of ADF test results.

    You might also want to compare what Tamino wrote with what is written here:

    http://en.wikipedia.org/wiki/Unit_root

  64. Bart Says:

    VS,

    Now you’re making some very dubious claims.

    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). These findings provide ”direct experimental evidence for a significant increase in the Earth’s greenhouse effect that is consistent with concerns over radiative forcing of climate“. See also Scott’s points in his most recent comment.

    There is a lot of fundamental physics involved, and parameterized in climate models. Do as you claim, and refrain from stating a strong opinion (like claiming that a whole scientific field is wrong; note that such an extraordinary claim needs extraordinary evidence) about the physical nature of climate. Some humility would suit you.

    Perhaps read up on the history of climate science first before making such strong and unfounded pronouncements. http://www.aip.org/history/climate/index.html

  65. VS Says:

    Bart, what claim are you referring to exactly when you write:

    “Now you’re making some very dubious claims.” ?

    As for:

    “There is a lot of fundamental physics involved, and parameterized in climate models. ”

    Of course they are, as they should be. The point is that these results are not derived directly from fundamental theory, hence the models are phenomenological.

    See my comment here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1232

    and here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1236

    Question: If the GCM models were fundamental, how on Earth could you have differently parametrized GCM models describing the same system?

  66. Heiko Gerhauser Says:

    Hi Tim,

    there’s a reason to use the world average and not local temperatures. As mentioned in an earlier comment, local weather depends quite a bit on the direction of the wind, so that the difference between the winter averages of two years can easily be 6C. That just makes it that much harder to see any signal at all, unless local temperatures go up by like 5C and you have decades of data.

  67. VS Says:

    OK Bart.

    Let me state “So far, they have failed.” as an opinion then. Apologies.

    So far they have failed, in my eyes. My arguments are listed above.

    Now you stop calling cointegration and unit root analysis a ‘funky statistical method’. That’s Rabett-speak, and I actually like the civilized tone of the discussion we are having here :)

  68. Heiko Gerhauser Says:

    Hi VS,

    what you are saying basically boils down to us not having certainty that thermostat-like feedbacks negate the anthropogenic warming. OK, so we don’t. But neither do we have much evidence for these presumed negative feedbacks, and it’s also clear that at some stage they’d be overwhelmed.

  69. Bart Says:

    Your claims 1, 2 and 3 are dubious, though there’s room for interpretation as to what exactly you mean. E.g. there will hardly ever be 100% mathematical proof for anything in nature.

    The Greek’s work is discussed here: http://www.realclimate.org/index.php/archives/2008/08/hypothesis-testing-and-long-term-memory/

    I’m starting to wonder, is your search for climate related information best characterized as a ‘random walk’, or are you specifically searching out research that comes to a particular conclusion?

  70. VS Says:

    Hi Heiko,

    Yep, the evidence is flimsy on all sides. However, this is not the picture painted by the IPCC.

    Also, purely out of interest, how are you so certain that:

    a) these feedback mechanisms will be ‘overwhelmed’
    b) and if (a) is true, that we’re very close to this happening

    Bart,

    My search for information is not a random walk, and I have argued all three claims in this forty-something-page discussion. Instead of simply ‘stating’ that I’m wrong, why don’t you tell me, with regard to, respectively:

    1) Where is the empirical proof (i.e. regression analysis)

    2) Where is the formal comparison of outputs with the data, as per my demands here: https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1284

    3) Are we still discussing the phenomenological nature of the GCMs? How about you answer the question I just posted above, namely: “If the GCM models were fundamental, how on Earth could you have differently parametrized GCM models describing the same system?”

    As a side note: you keep linking to realclimate, and to be honest, I don’t find Michael Mann’s personal blog the most reliable source on the internet. Especially as they are known to censor the comments section heavily.

  71. VS Says:

    Bart dammit! (excuse the agitation :)

    The post you link to on realclimate takes Tamino’s blog (which we discussed above) as a serious reference. How on Earth can I then take it seriously? Did you compare what Tamino wrote with what is written on unit roots (the definition is given in the wiki link above)?

    His claims are simply wrong. ‘Long term memory’? No, it’s in fact ‘perfect memory’ on our subsample, as the series contains a unit root.

    I feel I am now writing this down for the 10th time: calculating a deterministic trend on a process containing a unit root is misspecification. Hence it is meaningless. That discussion at realclimate is simply flawed in its postulates.
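    To make the misspecification concrete, here is a small simulation sketch (a toy example of my own, not anyone’s published analysis): regress a driftless random walk on a deterministic time trend, and the conventional t-test will “find” a significant trend far more often than the nominal 5%.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        X = sm.add_constant(np.arange(130))      # intercept + time trend, ~130 "years"
        rejections = 0
        for _ in range(200):
            y = np.cumsum(rng.normal(size=130))  # pure random walk: no trend at all
            fit = sm.OLS(y, X).fit()
            rejections += abs(fit.tvalues[1]) > 1.96
        print(rejections / 200)  # typically far above 0.05: spurious significance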

    Besides, I don’t appreciate the ad hominems lodged by the author at the reviewers of the Greek paper I was referring to, namely (thanks for the reference ;):

    Click to access 2008HSJClimPredictions.pdf

  72. Heiko Gerhauser Says:

    Hi Bart,

    a while back I had a discussion with James Annan about the heat balance. I asked whether direct measurements of radiation coming in and radiation going out were good enough to come up with the balance of 0.85 W/m2 by which the globe ought to be warming according to Hansen.

    His answer was that the data were not good enough.

    Now you can also look at ocean heat content, because that’s where virtually all of the 0.85 W/m2 should be going.

    But on Roger Pielke’s blog it’s basically argued that these data are also too poor.

    http://pielkeclimatesci.wordpress.com/2010/01/04/guest-weblog-by-leonard-ornstein-on-ocean-heat-content/

  73. Bart Says:

    VS,

    The large changes in climate in the past can only be explained by a climate sensitivity of around 3 (+/-1) degrees per doubling, which includes positive feedbacks. Many of these positive feedbacks are also clear from modern observations (e.g. of water vapor) and theoretical modeling compared with measurements (e.g. of the carbon cycle). Sure, there is uncertainty, but don’t confuse that with knowing nothing. Certain values of the climate sensitivity have much stronger evidence behind them than others.

    For changes in past climates, see eg this very good presentation: http://www.agu.org/meetings/fm09/lectures/lecture_videos/A23A.shtml
    (towards the end he talks about climate sensitivity)

    Other evidence for climate sensitivity not being much smaller or greater than three: http://julesandjames.blogspot.com/2006/03/climate-sensitivity-is-3c.html

    RC is a blog of a group of climate scientists; it’s not Mann’s personal blog. Besides, you’re dismissing Mann and Tamino very lightly, which also smells like an ad hom. Re RC’s review of Koutsoyiannis: I think they took issue with them relying very strongly on long-discredited arguments from deep inside the “skeptical” corner. If you take issue with such statements, then the best approach is to follow the trail back in time, backed up with a solid grasp of the scientific knowledge.

  74. Bart Says:

    Heiko,

    There are different estimates of ocean heat content, and while some go down to 700 m depth, others go down to 2000 m. The former seems to have flattened since 2004, whereas the latter has continued increasing.

    See, respectively,
    http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/
    and
    http://www.skepticalscience.com/empirical-evidence-for-global-warming.htm (last fig)

    Both exhibit short term variability of course, so I wouldn’t conclude too much from apparent plateaus (especially if they were preceded by a strong increase).

    I came across this link as well, which discusses the random walk concept:
    http://www.skepticalscience.com/The-chaos-of-confusing-the-concepts.html

  75. Heiko Gerhauser Says:

    Hi VS,

    I am not sure we are “close” to the point, but then neither do I see much evidence for strong, resilient negative feedbacks in the first place. Lindzen has been trying to come up with something, but I don’t find it that convincing.

    Look at the temperatures of the other planets in the solar system. Jupiter may have a very chaotic atmosphere with the potential for thermostat-like feedbacks just like Venus or Earth, but it’s rather colder. You can also do some back-of-the-envelope calculations with assumptions about how reflective clouds can get or how low relative moisture might get. This does leave some room.

    On the other hand, there’s also room for a runaway towards 100C plus once the ice sheets have melted.

    On realclimate and Tamino, I can understand why you may feel annoyed by them, but consider how you come across to Bart too. Having talked to him in person yesterday I positively know that you could improve on that score.

  76. VS Says:

    Hi Bart,

    I’ll check the links you posted later, but let me respond in short:

    My issue with Mann has to do with his dubious reconstructions. As I stated I also side with the Wegman report on it.

    See my post here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1230

    My issue with Tamino is elaborated here (and a bit further, again):

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1284

    These are not ad hominems, just strong disagreements, based on arguments.

    Posted on that RealClimate post, however, are unfounded attacks like this one:

    “…touching all the recent contrarian talking points (global cooling, Douglass et al, Karl Popper etc.) but is not worth dealing with in detail (the reviewers of the paper include Willie Soon, Pat Frank and Larry Gould (of Monckton/APS fame) – so no guessing needed for where they get their misconceptions)”

    I really, really, detest this type of ‘discussion’. It is based on insults, insinuations, authority fallacies (e.g. we’re the ‘scientists’, you’re a ‘layman’, so shut up), and many other unproductive statements.

    This MUST stop. The debate is being poisoned by individuals. I understand the pressures, and the fact that many of these individuals strongly believe that they are ‘saving humanity’ and must ‘take action NOW’, but this tone isn’t helping at all.

    Surely you must agree with me here.

    Also, as a side note: I think that the word ‘discredited’ is used too loosely within the climate science community. Apparently, as soon as a paper disputing a specific result appears in a climate science journal, that result is ‘discredited’.

    That’s not how science works. The opinions of three/four reviewers, who often know each other and the author(s), are not sufficient to ‘discredit’ something, especially if many scientists disagree with them.

    Sorry.

  77. VS Says:

    Hi Heiko,

    “On realclimate and Tamino, I can understand why you may feel annoyed by them, but consider how you come across to Bart too. Having talked to him in person yesterday I positively know that you could improve on that score.”

    I can imagine, but trust me, I’m trying to keep it ‘cool’. Try to assume good faith :) The debate is heated, but I think we are keeping it quite decent.

    Keep in mind that while Bart feels his discipline is under attack, my discipline is, in my eyes, being completely abused here by various individuals.

  78. Heiko Gerhauser Says:

    Hi Bart,

    in the guest post on Pielke’s blog it’s basically argued that there’s also a possibility that ocean heat content below 2000 m might be changing, and that that depth range is really poorly sampled. I don’t really understand this particular issue that well; my feeling is that ocean heat content measurements are supportive of warming over the last 30 years and mildly supportive of a continued radiative imbalance, but that the uncertainties are large.

  79. VS Says:

    Hi guys,

    I might be ‘contradicting’ myself here, but I just bumped into this new paper. So in the spirit of open and fair discussion, here goes:

    Click to access wip04.pdf

    It basically says the opposite of BR.

    They actually use a dynamic panel setup (so with both a cross-section and a time element) to estimate the warming etc. effects. They don’t use averaged temperature measurements, but individual weather stations (hence the panel dimension), and they include aerosols in their analysis. I haven’t read it carefully yet, but I don’t think they use SSR in this case.

    Tim, this could be a very interesting addition.

    However, they, for some strange reason, don’t even mention unit roots, which, just as in regular time series, are a severe problem in panel datasets. This is especially strange, because Jan Magnus is a good econometrician (!).

    Once I find the time to read the paper carefully, I will email him to ask him about it, and I’ll keep you posted.

  80. Bart Says:

    VS,

    Does telepathy exist after all? I was just going to post a link to that same paper by econometricians from Tilburg University. I’ve only skimmed it very quickly so far. They find a higher climate sensitivity than most, the reason for which is not clear to me yet. I’m curious about your opinion on this analysis indeed!

    Re the debate being poisoned: From where I’m sitting, it is poisoned by the likes of Soon and Monckton. It’s not about who is a layman or not; it’s about cherrypicking and twisted logic to arrive at unfounded claims. There ARE a lot of empty talking points going around, and they keep resurging irrespective of their flimsy nature. That is what is poisoning the popular debate.

    A lot of these talking points resemble the argument along the lines of “I see a bird flying in the air. Therefore the theory of gravity is wrong“.

    Now apparently few people see gravity as a threat to their way of life. For AGW, that appears to be different.

  81. VS Says:

    Hi Bart,

    I couldn’t resist, and I read it (perhaps I should switch to climate econometrics, it seems it’s eating up most of my free and some of my not-so-free time recently ;)

    I’ll run my comments past some fellow (and especially senior ;) econometricians as well (a few of them specializing in exactly this kind of analysis, namely dynamic panel models).

    If they endorse my concerns, I will email Magnus personally.

    In particular, the first thing that caught my eye is that they find a dangerously high autocorrelation coefficient (the ‘persistence’ we were talking about earlier), namely 0.91 (this is the autoregressive coefficient beta1, listed in equation 11 of the paper).

    In light of the unit roots (corresponding to perfect persistence) found in temperature series, this should raise some concerns. Keep in mind that if a series contains a unit root, regular inference is invalid (hence the ADF tests), so coming up with such high persistence is in fact exactly what you would expect to find in such series.

    Allow me to illustrate.

    The ADF test on, say, the CRUTEM3 data doesn’t reject I(1) under any of the alternative specifications (i.e. no intercept; intercept; intercept and trend; again with AIC lag selection and MacKinnon (1996) one-sided p-values). I(2), however, is clearly rejected. This is the basis of the conclusion drawn by most authors (references in first post) that temperature is in fact I(1).
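    For concreteness, this is roughly what that battery of tests looks like in code (a sketch on a simulated stand-in series; the real exercise would of course load the CRUTEM3 anomalies instead):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(7)
        temp = np.cumsum(rng.normal(0, 0.1, 160))  # stand-in for ~160 years of anomalies

        for spec in ("n", "c", "ct"):  # no intercept / intercept / intercept + trend
            p_level = adfuller(temp, regression=spec, autolag="AIC")[1]
            p_diff = adfuller(np.diff(temp), regression=spec, autolag="AIC")[1]
            print(spec, round(p_level, 3), round(p_diff, 3))
        # levels: unit root not rejected (consistent with I(1));
        # first differences: unit root clearly rejected (so not I(2)).
        # Note: older statsmodels versions spell the "n" option as "nc".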

    However, if we simply choose to ‘ignore’ these test results, and go ahead and estimate the temperature series as an AR(1) stationary process, we get the following estimation results:

    Variable   Coefficient   Std. Error   t-Statistic   Prob.
    C          -0.048767     0.112268     -0.434377     0.6648
    AR(1)       0.864812     0.046451     18.61792      0.0000

    R2 = 0.73

    If we repeat this exercise with a simple deterministic trend, we get:

    Variable   Coefficient   Std. Error   t-Statistic   Prob.
    C          -0.788446     0.082953     -9.504749     0.0000
    @TREND      0.007374     0.000812      9.086511     0.0000
    AR(1)       0.547613     0.074577      7.342909     0.0000

    R2 = 0.78

    Note how, when ignoring the established I(1) property of the series, our (spurious!) estimate leads us to conclude that the persistence term is in fact equal to 0.86 (and significantly different from 1!). If we include a deterministic trend, it even drops to 0.55.
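    The same effect is easy to reproduce on simulated data (a companion sketch with a stand-in series, not the actual CRUTEM3 data):

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(3)
        estimates = []
        for _ in range(500):
            y = np.cumsum(rng.normal(0, 0.1, 160))  # true unit root: rho = 1
            estimates.append(AutoReg(y, lags=1, trend="ct").fit().params[-1])
        print(np.mean(estimates))  # noticeably below 1: the downward Dickey-Fuller bias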

    This simple analysis, however, doesn’t constitute either a rigorous proof, or a ‘refutation’ of Magnus et al, or even strong verification of my own ‘hypothesis’.

    In my eyes, it does however raise somewhat of a red flag.

    Hence, I’ll investigate further, and I’ll get back to you guys here.

    PS. I also found it curious that they basically ignored the I(2) property of GHGs established in the literature… but that’s a different story altogether.

  82. Tim Curtin Says:

    Thanks VS for link to Magnus.

    After a quick scan my first reaction is that the paper has great interest but some basic misconceptions. Magnus et al say early on: “When we observe an increase in temperature, we observe only the sum of the [warming] greenhouse effect and the [cooling] radiation effect, but not the two effects separately.”

    Perhaps this explains the apparent negative effect on temperature of “Global” solar surface radiation in July at Hilo and Barrow, but it seems odd not to allow for any warming effects from changes in SSR other than from more or less dimming from aerosols, unless and until the local “dimming” effects are documented in full.

    Magnus et al go on “Our purpose is to try and identify the two effects. This is important because policy makers are successful in reducing aerosols (which has a local benefit) but less successful in reducing CO2 (which has a global, but almost no local benefit). Reducing aerosols will cause cleaner air, but also more radiation (‘global brightening’), thereby reinforcing the greenhouse effect.”

    However, they ignore the very large local and global benefits from rising atmospheric CO2 in terms of the well-attested growth of NPP associated with it (as I have documented in my peer-reviewed paper “Climate Change and Food Production”, 2009, at my website). This effect stems from the increased partial pressure of atmospheric CO2 resulting from the higher atmospheric concentration (Lloyd and Farquhar, passim). Reducing that concentration from today’s 389 ppm to 350 ppm, as proposed by Hansen and – in effect – CoP15, must, cet. par., have a negative impact on the growth of NPP associated with the average annual 57% biotic uptake of CO2 emissions since 1958, now running at around 6 GtC p.a. from emissions of over 10 GtC p.a. Reducing that incremental uptake to less than 2 GtC p.a. (as implied by a 60% reduction in emissions from 2000 levels) will hardly have a positive effect on NPP and world food production.

    More generally, I see that Magnus et al at no point test for auto-correlation (Durbin-Watson) or unit roots etc. Another random walk anyone?

  83. Bart Says:

    VS,

    I notified Tamino of your “invitation”. Even though I’m happy to host this interesting discussion here (the statistical details of which go over my head), a more efficient way of communicating with him may be to go over to http://tamino.wordpress.com/ yourself and bring it up with him directly (the subject has already been brought up in the latest thread).

  84. VS Says:

    Hi Bart,

    I left a short reply there, it’s ‘stuck’ in moderation. I’ll repost it here, together with Tamino’s ‘reply’:

    “VS // March 9, 2010 at 2:31 pm | Reply

    Hi Tamino,

    I find it interesting that you claim that ‘I’ personally failed my ‘ADF’ test. You might dispute my test results (posted on Bart’s blog), but are you also claiming the same for all these studies as well?

    ** Woodward and Grey (1995)
    – confirm I(1), don’t test for I(2)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Kaufmann et al (2006)
    – confirm I(1), (they state however that temperatures are ‘in essence’ I(0), but their variable GLOBL is confirmed to be I(1))
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    …I’m sure there are others.

    Temperature may be ‘bounded’ over its long 100,000-year cycle (as observed over the past 500,000 or so years); however, on the subset of 150 years or so on which we are formally studying it, it can easily be classified as a random walk.

    Keep in mind that the limited variance of the first difference errors de facto keeps it bounded over this period.

    You are however welcome to hop over to Bart’s blog and respond.

    And:

    “There are so many bonkers theories from so many bonkers commenters, we’ll just have to take ‘em one at a time.”

    Let’s try to keep it civilized, OK?

    [Response: There’s nothing uncivilized in calling your claims bonkers, because they are. Frankly, the label is better than you deserve. As for failing your ADF test, you just plain got it wrong.

    But as I said before, you are not important enough to deserve a distraction from my present efforts. I’ll get around to you, but in the meantime you can wait.]”

    If this is another Halpern/Rabbett-like character, I seriously have no interest in engaging in a discussion.

    PS. It’s very interesting how he claims that I, together with all these other authors, ‘got the ADF test wrong’, without naming a single argument, because he’s ‘too busy’.

    Incredible, the level of the ‘debate’ on these blogs… I’m actually starting to wonder if all of this is worth MY time.

  85. VS Says:

    Note:

    My comment just got out of moderation, and he added this to his ‘reply’:

    “It’s either a complete failure of understanding on your part, or dishonesty, that causes you to misrepresent the work of Kaufmann & Stern. As for Beenstock & Reingewertz, their claims are loony.”

    Wow.

    Actually, I know enough already.

  86. VS Says:

    PS. Bart, tell me please, WHAT ON EARTH is wrong with this entire clique?

    How come NOBODY can discuss NORMALLY with somebody they disagree with?

    Is this a ‘scare tactic’ or something? Are they (or perhaps I should say ‘you guys’, because you do tend to endorse these individuals) trying to keep the reasonable people out of the debate?

  87. Marco Says:

    @VS: what is ‘wrong’ with these people is that some math, which may or may not be accurate or even relevant for the situation at hand, supposedly trumps observations and well-established physics.

    Stuff like a cooling stratosphere with a warming troposphere simply does not fit with global temperatures being a mere random walk (and most certainly not with solar influence being the driver). The enhanced greenhouse effect *does* explain both observations (cooling stratosphere, warming troposphere).

    When somebody then comes along and claims “you are wrong, they are right, just look at the math”, they wonder how somebody can just make these claims and thereby neglect observations that do not fit said claim. It should make a statistician or mathematician a bit more humble if their math makes a claim that essentially contradicts an observation.

    And Tamino *is* busy. He’s writing a paper that should put Anthony Watts to shame. Watts is getting credibility from so many people that his false claims require immediate attention.

  88. VS Says:

    Marco,

    You write:

    “It should make a statistician or mathematician a bit more humble if their math makes a claim that essentially contradicts an observation.”

    Actually, statistics is the discipline that formally deals with observations. I also gave an explanation about the ‘random walk’ interpretation in numerous posts in this thread.

    Finally, when I said ‘ what on Earth is wrong’, I was talking about the tone. Are you endorsing that tone? Judging by your earlier posts in this thread, I think you might be.

  89. jfr117 Says:

    VS you are not alone in your shock at how shrill these blogs become when you ask a question. it’s really quite sad for science. tamino is the worst one and an example of elitism that makes his message impossible to swallow for many people.

  90. Marco Says:

    @VS:
    What tone do you expect when you walk into a room and say “Oi guys, you’re all wrong, I’m right!” A cheering reception?

    Do remember that it is not the first time someone runs in and makes large claims. Somewhere the kindness stops. Endorsing the tone is not the right word, I *understand* the tone. It’s a result of many, many, many claims by people of “the ultimate proof” that AGW is a hoax, wrong, fraud, whatever, only to be proven wrong (but not admitting as such).

    Let me, in the case of the Israelis, repeat two problems with their analysis:
    1. The observations fit the physics: cooling stratosphere, warming troposphere. The analysis based on the observations by the Israelis does *not* fit the physics. I’d do some major thinking before essentially claiming “the physics are wrong”.

    2. Based on their analysis, the same magnitude of forcing (in W/m2) for solar and CO2 gives a *different* warming (ca. 1.5 vs 0.5 degrees). Another result that contradicts basic physics. Throw away basic physics? Ah, in this case rather problematic, because several aspects of basic physics (the data in particular) were the *input* for the analysis. A circular argument follows: data input = analysis contradicts basic physics = basic physics is wrong = data cannot be used = analysis cannot be performed!

    Two aspects where the analysis results in direct contradiction to known physics. Do you really doubt the physics? Or do you perhaps take a really good look at the math you chose (using ADF *is* a choice) and maybe check whether it really is suited for the type of data you are analysing?

  91. MartinM Says:

    A hypothesis is never true with a ‘probability’, this is the first thing you learn in statistics.. it’s either true, or false… the only ‘probabilities’ in statistics are (if you did everything correct) the probabilities with which you make a Type I / Type II errors. A big conceptual difference.

    If you ignore the entire field of Bayesian statistics, sure.

  92. VS Says:

    Marco,

    1) How does it not fit ‘the physics’? Because they reject runaway warming due to CO2? Or are you referring to something else? Explain please.

    2) Errors in functional relationships can also be the cause. The models are phenomenological rather than fundamentally derived, so this is a realistic possibility. Are you aware of some fundamental physical model that violates statistical findings in such a way?

    Now here’s a novel thought, could it perhaps be the case that we do not completely understand how our climate functions?

    Whatever the case, BR have made a valid addition to the debate, especially as they have employed the most rigorous statistical analysis I have so far seen in this context.

    The poisonous tone is still not justified.

    MartinM

    Are you suggesting that the “90% probability that modern warming is caused by man-made emissions” is derived via Bayesian statistics? Over there I asked for a reference for that calculation, perhaps you can provide it. I’m honestly interested.

  93. VS Says:

    jfr117

    I have no idea where his ‘elitism’ comes from. Perhaps the fact that his ‘fans’ know less about statistics than he does got to his head.

    I studied under some extraordinary statisticians/econometricians (some of whose estimators you can find in popular software packages), and am familiar with both their work and their modus operandi. None of them would ever display this kind of behavior when challenged on a technical matter.

    Statistics is a very counter-intuitive discipline, and I have been taught that when you feel that somebody with less formal training in statistics doesn’t seem to understand what you are saying (and you are convinced that you are right), you do your very best to explain it.

    That’s at least what I tried to do in this thread, in spite of the warnings of a few of my friends not to get my hands dirty on this stuff. It seems that they might have been correct.

    You’re right, this has nothing to do with science anymore.

  94. Timo van Druten Says:

    VS / Tim Curtin/Bart,

    I think you can forget the paper from Jan Magnus et al.

    http://climategate.nl/2010/03/09/four-degrees-warming-in-2050-oops-you-used-the-wrong-dataset/

  95. jfr117 Says:

    tamino is a caricature of all that is bad about climate science. he may be smart, but that becomes moot by turning off everybody but his followers. what’s the point of preaching to the choir?

    if tamino is your prophet, i ain’t buying your religion.

    anyways, give blogs such as this one and Scott Mandia’s credit for engaging people with questions in a reasonable tone. there should always be questions since we do NOT understand the climate system very well. it is full of non-linear feedbacks that we cannot quantify or understand based on a 30 year warm period!

  96. Scott Mandia Says:

    jfr117:

    I appreciate your comments because I do recall I drifted into a poor tone with you on one comment at my blog and I felt bad.

    I try to be civil. :)

    Regarding Tamino: When I first ventured into the fray about a year ago, I also thought Tamino’s tone was mean-spirited. However, after seeing the countless false claims repeated over and over again, I understand why he has no patience for it anymore. I am new and not too cynical yet.

    I am still a huge fan of his work which speaks for itself, IMO.

  97. Marco Says:

    @VS:
    What you apparently fail to understand is that B&R claim that, based on their results, the same magnitude of a forcing will result in *different* warming if the forcing is solar or CO2. Please explain us the physical reason for that difference. It’s like saying that putting a 50 kilo box of feathers on your stomach would be less heavy than putting a 50 kilo box of lead there.

    Moreover, their results also state that the reason for warming is solar, not CO2. More of a problem there, since a solar influence on warming should not yield a cooling stratosphere. Yet, the observations show it *is* cooling.

    Two violations of known physics. I’d say there’s something really, really fishy with the math of B&R if their results contradict observations and physics.

  98. jfr117 Says:

    if tamino can’t handle questions (VS’ question wasn’t even questioning tamino’s work per se) then he shouldn’t blog. what is the point of his blog if not to educate and provide a place for discourse?

    if tamino’s work satisfies himself and others, then swell. but just because tamino is satisfied with it does not mean everybody is satisfied with it.

    this higher-level statistics has a place in this debate, but it is not the be-all, end-all. we are still talking about a physical system. while tamino throws out the past decade of plateaued temps as noise, i see it as a possibility to learn something new about the climate system.

  99. Bart Says:

    VS,

    Even though your question “what on earth is wrong with this entire clique” sounds rather like a rhetorical question, I’ll answer anyway.

    I may have a different way of communicating than many others on either side of the popular debate. But as to the contents, I am firmly on the side of mainstream science, because there is a coherent framework of understanding based on looking at all the evidence in its totality. From my PoV, the most damage to public understanding is done by those who spread misinformation (with Morano and Watts as prime exponents); much more so than by supporters of science who let their frustration shine through in their language (a frustration which I, like Marco, understand all too well, even though I try to remain civil, see my views on communication eg here and here).

    Let me ask you the same question that I posed to Tom Fuller: Imagine the hypothetical situation that a brand of science is being strongly criticized / attacked, but that the criticism by and large doesn’t make a lot of sense. And in those instances where the critics do have a point, it’s not relevant for the bigger scientific picture: It stands rocksolid. Again, I ask you to imagine the hypothetical. Think of e.g. evolutionary biology being criticized by creationists; epidemiology being criticized by tobacco apologists; vaccine researchers being criticized by antivax-ers, etc. The arguments of the critics are completely bogus, but packaged such that it’s difficult for the layperson to discern who is talking real science and who is merely setting up a plausible-sounding bogus story (whether intentional or born out of confusion). Evolution has been “refuted” countless times by creationists (not). How would you advise the scientists (and their supporters) to respond?

    On a post about false claims at falsification of AGW, Robert Grumbine commented that “Immanuel Velikovsky thought that clouds had their own anti-gravity system, or at least proved that gravity didn’t work as Newton or Einstein said.” On his Wikipedia page it says about Velikovsky that he gained “enthusiastic support in lay circles, often fuelled by claims of unfair treatment for Velikovsky by orthodox academia”.

    Hmm, that rings a bell, doesn’t it…?

  100. sidd Says:

    “(i.e. the GHG forcings are I(2) and temperatures are I(1) so they cannot be cointegrated, as this makes them asymptotically independent.)”

    Let me see. I take a diode, biased in the exponential region, and put a varying current through it. I measure both the current and the voltage drop across the diode and discover that I is exponential and V is linear, therefore they are “asymptotically independent”
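    In code, the same toy experiment looks like this (a sketch assuming an idealized Shockley-type diode law, with made-up values for the saturation current and thermal voltage):

        import numpy as np

        V = np.linspace(0.5, 0.7, 200)   # linear voltage sweep (volts)
        I = 1e-12 * np.exp(V / 0.026)    # exponential diode current (amps)
        print(np.corrcoef(np.log(I), V)[0, 1])  # ~1.0: the two series are perfectly dependent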

  101. Scott Mandia Says:

    jfr117:

    If you go back and look at the posts from VS you will see that he essentially called Tamino “clueless”, just with nicer terminology. Tamino fired back with strong language, but he did not tap the bees’ nest; VS did.

  102. MartinM Says:

    Are you suggesting that the “90% probability that modern warming is caused by man-made emissions” is derived via Bayesian statistics?

    My one and only suggestion is that if you wish to talk with authority on a given subject, it’s probably a good idea to avoid talking complete and utter bollocks.

  103. MartinM Says:

    Two violations of known physics. I’d say there’s something really, really fishy with the math of B&R if their results contradict observations and physics.

    Add to those two the negative coefficient their model assigns to the first difference of methane forcing, which is patent nonsense. It’s not hard to see why they get that result; the growth rate of atmospheric methane concentration has been dropping off for the past few decades, and was increasing prior to that, while temperatures were declining slightly. But that should have been a huge red flag; it should have been clear that they were getting unphysical results because they were missing an explanatory variable necessary to produce a good fit to the observed temperature trends; namely, aerosols, without which it’s difficult to account for the decline. All they’ve really demonstrated is that GHG forcings and TSI alone cannot account for temperature changes over the past century or so. Well, duh. We already knew that. So, they’ve falsified a sucky model nobody was actually proposing anyway. Brilliant!

  104. MartinM Says:

    Hmph. I notice my prior two comments are a little on the hostile side, for which I apologise. I’m ill, and consequently a bit grumpy today.

  105. Arthur Smith Says:

    VS – you say “I simply don’t have the knowledge to properly evaluate them on my own” – perhaps you should try to gather that knowledge before opining further on the subject. :)

    I may publish my little contribution; however, I hardly think it’s very original given the many textbooks on the subject. In the meantime, a comment responding to G&T at the journal they published in is more urgent, and in the works.

    On this whole “random-walk” issue – I think it’s very interesting because it suggests something very profound (and worrisome) for Earth’s climate response function. Considering Earth’s average surface temperature as a reasonable metric (something more along the lines of total surface heat content is probably better, but average T is not a bad proxy for that), the standard systems theory analysis from the physical constraints implies that that average T is determined and constrained through a feedback process. Increasing surface temperature strongly increases outgoing radiation and thus creates a strong negative feedback to bring temperatures back down again. There are known positive feedbacks in the system (associated with water vapor, clouds, and ice cover) but the central assumption in all of climate science is that, for Earth, climate is essentially stable, and the negative feedbacks dominate. The Earth has a particular set-point average surface temperature (or more correctly, average surface heat content), with slight variations caused by things like the solar cycle, El Nino-like internal redistributions of energy, and other small changes in the responsible physical parameters.

    But the analysis VS is promoting suggests something very different – that temperature is not constrained at all, but randomly walks up and down all on its own. That can only happen if the climate system is neither stable nor unstable (since we don’t have a Venus-like runaway either) but right on the cusp of stability, with positive feedbacks exactly cancelling negative feedbacks, at least on the time scale being discussed (decades to centuries?)

    But that means the equilibrium response of such a metastable climate system to a forcing would be not 3 C as the IPCC estimates. It would be infinite. VS’s argument here is for an arbitrarily large climate sensitivity! Not a good thing at all!!!!
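    A minimal numerical sketch of the difference (my own illustration; the feedback strength lam is an arbitrary placeholder, not a fitted value):

        import numpy as np

        rng = np.random.default_rng(0)
        n, lam = 500, 0.3           # lam > 0 means a net negative feedback
        T_stable = np.zeros(n)
        T_walk = np.zeros(n)
        for t in range(1, n):
            e = rng.normal(0, 0.1)  # random "weather" noise
            T_stable[t] = (1 - lam) * T_stable[t - 1] + e  # pulled back to a set point
            T_walk[t] = T_walk[t - 1] + e                  # lam = 0: unit root, no set point
        print(T_stable.std(), T_walk.std())  # the walk's spread keeps growing without bound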

  106. Greenhoof » Blog Archive » Lorne Gunther: Denial (and dumb analogies) are us Says:

    […] I invite you all to have a quick read of Bart Verheggen’s great post on this issue. In addition to having pulled together clearer images of the graph at left, he has […]

  107. Heiko Gerhauser Says:

    Hi Bart,

    I may not have a very different view from you on the basic, physical science, but I certainly have a very different view on the framing of the discussion as science being under attack.

    I think it can be very polarising to immediately classify every question about climate change into one of two categories

    A) It attacks all of established climate science, and must therefore be wrong

    B) If it doesn’t overturn all of established climate science, well, it must be a nit pick and unimportant.

    It’s something I see in this thread too, I even engaged in it myself to a point. Once you are at point B, and the other person suggests they do think the issue deserves discussing, this immediately gets taken as a suggestion for A.

    Yet, climate science is a complex issue with many facets and improving it involves critiquing that detail in my opinion. Watts may publish a whole lot of rubbish on his site, and he does, but I don’t think this is an attack on science. In those instances where it’s complete rubbish, it contributes little; but in those instances where he presents valid points, it goes to improve the science.

    I think an attack on science would be book burning, ie actual destruction of information or the like. When you talk about an attack on science, you are primarily talking about the public’s understanding of the climate issue, while I rather think of science as the body of information that is there so that experts with the time to study the issue in depth can get a better understanding of the physical workings of climate.

    Where the understanding of the public is concerned, I think some interesting work reported by Ron Bailey recently helps to illustrate my concerns. He showed statistics that (US) climate scientists self-classified as much more “liberal” and much less “conservative” than the US public at large. He also presented evidence that the messenger mattered, and so did the presentation of the message. In short, if Dick Cheney says climate change is “real” and uses it to justify lower taxes and the war on Iraq, people who like Dick Cheney and the policy measures he advocates are more likely to believe climate change is real, than if Al Gore says “climate change is real” and puts the spin on it that this justifies higher taxes and more power for the United Nations.

  108. Heiko Gerhauser Says:

    Hi Marco,

    I haven’t read the paper, so can’t comment on that. But the two apparently simple physics points you raise I can’t resist. Why shouldn’t an equal solar and greenhouse gas (globally averaged) forcing have an unequal feedback response? As I understand it, the ice ages were precipitated not so much by less solar radiation, but rather by a different seasonal distribution of that radiation, i.e. by less radiation being available to melt snow in the summer at high latitude. The same sort of thing could in principle apply to solar and greenhouse gas forcing; for example greenhouse gas forcing might cause more precipitation in winter (and therefore more snow) while solar forcing might be more concentrated in summer (causing relatively more snow melt). And as for the stratospheric cooling: If the solar change is amplified strongly by an enhanced greenhouse effect from water vapour, then you’d also expect to see stratospheric cooling, wouldn’t you?

  109. Scott Mandia Says:

    Heiko,

    Take a look at this study:

    The Second National Risk and Culture Study: Making Sense of – and Making Progress In – The American Culture War of Fact

    “Individuals’ expectations about the policy solution to global warming strongly influences their willingness to credit information about climate change. When told the solution to global warming is increased antipollution measures, persons of individualistic and hierarchic worldviews become less willing to credit information suggesting that global warming exists, is caused by humans, and poses significant societal dangers. Persons with such outlooks are more willing to credit the same information when told the solution to global warming is increased reliance on nuclear power generation.”

    Simply put, if the solution is palatable then the message is correct. If the solution is not palatable, then the message must not be correct. Classic shoot-the-messenger.

    This study really helped me to understand why the climate change issue is so politically polarizing.

  110. Heiko Gerhauser Says:

    Hi Scott,

    that’s precisely the study Ron Bailey picked up on. What he added to it boils down to: climate scientists are liberal (in the US political sense), present themselves as liberal and connect the science to solutions liberals like. Consequence: They are no longer trusted by half the population.

  111. Tim Curtin Says:

    Bart (on 5th March) you said “notwithstanding the fact that all yearly temperatures of the past 30 years are higher than any of the yearly average temperatures between 1880 and 1910”. I have previously pointed out here that this is a false claim, as the met. station coverage of the land surface area from 1880-1910 was at best 10-20% (I have the NOAA map for 1885 showing no stations in Africa between Cape Town and Cairo, with not many more by 1910). All of that huge area now has, and conceivably always has had, mean temperatures higher than anywhere else on the planet, so a baseline for average temperatures between 1880 and 1910 built on a globe that excludes Africa is invalid and seriously misleading. Moreover, as GISS etc. now report far fewer stations at the high-latitude top end of the NH since 1990, “global” average temperature is likely to be overstated. So Bart, how do you justify your opening claims as quoted above?

  112. Tim Curtin Says:

    Scott, back on 5th March you cited Feulner, G., and Rahmstorf, S. (2010). On the effect of a new grand minimum of solar activity on the future climate on earth, Geophysical Research Letters, in press. with their conclusion “For both the A1B and A2 emission scenario, the effect of a Maunder Minimum on global temperature is minimal. The TSI reconstruction with lesser variation shows a decrease in global temperature of around 0.09oC while the stronger variation in solar forcing shows a difference of around 0.3oC.”
    Please note that all the IPCC emissions scenarios are irrelevant, because they assume away the negative feedback through uptakes of CO2 emissions by global biology (photosynthesis), which since 1958 have accounted for 57% of emissions, but which in the IPCC’s main model (MAGICC) are explicitly assumed to be nil. This rests on its author Tom Wigley’s assumption (Tellus 1993) that uptakes follow a rectangular hyperbolic (Michaelis-Menten) function, i.e. reached a peak c. 2000 and then cease to rise ever again. There is of course no evidence for this, as shown by myself (E&E October 2009) and by Wolfgang Knorr (GRL, November 2009). The MAGICC assumption has the Madoffian benefit of allowing projections of the growth of atmospheric CO2 to double from the actual c. 0.41-0.46% p.a. since 1958 to 1% p.a. from 2000 – and thereby activate politicians via the resulting exaggerated projection of GMT to 2100.

  113. Marco Says:

    @Heiko:
    If you agree with enhanced greenhouse forcing by water vapour (through solar influence), then assigning hardly any greenhouse forcing to CO2 is…ehm…rather contradictory.

  114. Marco Says:

    Ah, look, Tim Curtin repeats Watts’ false claims. Tim, removing high altitude and high latitude stations doesn’t do much to the trend. If anything, it introduces a *cooling* bias.

  115. Bart Says:

    Hi Heiko,

    I agree with you that the discussion has polarized to an unhealthy extent. Both sides of the ‘debate’ have had a part in this. However, if there hadn’t been a string of extra-scientific attacks on the science, scientists (and their supporters) wouldn’t have gotten as defensive as they have. I.e. the downward spiral of polarization has been set in motion (in my view) by what I call the attacks on science.

    You are right that a knee-jerk, defensive reaction along the lines of ‘any criticism is by definition invalid and must be scorned’ is unhelpful, to put it very mildly, and even damaging to the (understanding of the) science. The “must” in your examples A and B is wrong. However, being faced with many bogus statements of having refuted AGW, the “must” can reliably be replaced by “very likely to be”. And more often than not, the person having made the claim of refutation is not open to counter-arguments, or to admitting logical flaws in their reasoning, or to a myriad of observations and understandings that are in direct conflict with their statement. If someone claims to have refuted AGW, just as if someone claims to have refuted evolution (and links to the Discovery Institute as proof), some loud alarm bells go off in my head.

    I wrote more about my views on how to communicate here and here.

  116. VS Says:

    Hi Timo,

    So what it basically boils down to is that they used a dataset with a known, or even deliberate, ‘bias’. Hmm, that doesn’t look good. I’m trying to look into the I(1) matter I posted above. If somebody has a direct reference to a panelized version of the Magnus et al dataset (i.e. the CRU 2.1 temperature set), please do link. I started downloading the data they used, but it will take me hours and hours of data-construction to get it into the proper shape, and I simply have no time for that.

    Hi Jfr117 and Scott,

    About Tamino: he wrote down things that were plain wrong, and I still firmly stand by that. I think I even used some formal notation somewhere up there to show/explain it. I.e. he made a ‘clear’ distinction between an AR(1) process and an I(1) process, while in fact an I(1) process is simply a specific, non-stationary realization of an AR(1) process. Put differently, I didn’t simply say that he was ‘clueless’; I gave arguments why his position was flawed. There is a difference between the two.

    What he lodged against me however, was a baseless insult. My reflex in this case is to interpret it as incompetence, rather than malice.

    Hi sidd,

    The order of integration doesn’t have anything to do with the linearity/non-linearity of the series, but rather with the underlying stochastic process. Yes, I(1) and I(2) processes are asymptotically independent. Take a look at some of the references in my first post: it’s discussed there. As a matter of fact, it is the whole reason that BR had to resort to polynomial cointegration.
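    For readers unfamiliar with how such orders of integration are established: you difference the series until an ADF test first rejects a unit root. A sketch on a simulated stand-in (the ‘ghg’ series below merely mimics an I(2)-like forcing; it is not the actual data):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(5)
        ghg = np.cumsum(np.cumsum(rng.normal(0.01, 0.05, 160)))  # doubly integrated: I(2)
        x = ghg
        for d in range(3):
            pval = adfuller(x, regression="c", autolag="AIC")[1]
            print("d =", d, "ADF p-value =", round(pval, 3))
            x = np.diff(x)
        # the smallest d at which the unit root is rejected is the order of integration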

    Hi MartinM,

    Utter bollocks? How about you come up with that reference first, instead of (de facto) reiterating the position that the hypothesis can never be rejected because it’s ‘physical’.

    Hi Arthur,

    I’m sincerely looking forward to your contribution in the GT debate. I firmly believe that the debate should be expanded, and more physicists should be involved. In that context, I believe that the efforts of GT, you and Kramm et al, ought to be lauded. Civilized dispute is progress, ‘consensus’ is non-science.

    I however doubt that my lack of physics knowledge means that all my knowledge of statistics can be thrown out the window.

    As for the random walk/equilibrium comment, you’re not completely on the mark there.

    First off, I outlined the ‘random walk’ issue above, and why an I(1) process is de facto bounded on the interval we are looking at. Be careful to make a distinction between how we observe something, and what the actual (unknown) underlying process is. That, too, I dwelled upon in earlier posts.

    Besides, cointegration is actually a tool to establish equilibrium relationships between series containing stochastic trends. Note that cointegration implies an error correction mechanism, whereby two series never wander off too far from each other. It therefore allows for stable/related systems which we nevertheless observe as ‘random walks’. The term ‘random walk’ is a bit misleading here, so it is better to say that the series contain a stochastic trend.

    Take a look at the matter, it’s quite interesting. I keep saying it: hence the Nobel prize!
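    A bare-bones illustration of that idea (my own sketch using the Engle-Granger test in statsmodels; this is not B&R’s polynomial-cointegration procedure):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(8)
        x = np.cumsum(rng.normal(size=300))  # common stochastic trend
        y = 0.8 * x + rng.normal(size=300)   # cointegrated: y never drifts far from 0.8*x
        z = np.cumsum(rng.normal(size=300))  # an unrelated random walk
        print("y vs x:", round(coint(y, x)[1], 3))  # small p-value: cointegration detected
        print("z vs x:", round(coint(z, x)[1], 3))  # large p-value: no equilibrium relation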

    PS. Note that the whole reason I involved the GT publication here was to ‘dismiss’ Halpern (it had to do with his role in it). I sincerely despise the role individuals like him play in these discussions, and I think I clearly outlined that above. Besides, considering the gibberish he wrote on his blog, I think his efforts amounted to exactly the ‘campaign of misinformation’ that Bart keeps referring to.

    Hi Heiko and Bart,

    Yes, that’s exactly how I see it: your A/B distinction (nicely put, I’ll ‘save’ that ;). Coming from a discipline that is constantly ‘under attack’, I understand the sentiments. However, this level of ‘dismissal’ of anything coming from any other discipline is simply non-scientific. I don’t buy the ‘traumatized scientific field’ argument. We are scientists; our predecessors managed to keep their calm in front of guillotines, exiles, excommunications and burning stakes… that’s our tradition, that’s our pride. In ratio we trust.

    Let’s live up to it.

    Another good example of a field ‘under siege’ is evolutionary biology, some 5-10 years ago. Note how the biologists ‘won’: namely, by repeating their arguments and delivering their actual proof (!) over and over and over again, until it was clear. Add to that they were arguably coming from a discipline that is much older and well established, and has delivered *much* more proof than climate science.

    It’s a tough business, sure, but nobody promised that science would be easy.

    I think this is also the reason I cannot communicate with individuals like Marco. Sorry Marco for getting a tad personal here, but you have not responded to a single methodological concern I have raised. You simply keep repeating:

    “The physics is right, the hypothesis is right, what do you want me to do? Reject my hypothesis? No! [enter here your favorite “you people don’t know s***” one-liner]”

    Yes, I want you to first give in to the possibility that you’ve missed something somewhere. As long as you maintain that that can never be the case, we have nothing to talk about, as I really don’t have the time and energy for a virtual spit-fight. Good day.

    Finally Bart,

    I truly understand that coming from your methodological point of view (i.e. as a climate scientist) you believe that you guys have delivered sufficient proof. However, this is not the case when viewed through the methodological lens of other disciplines, such as mine.

    We need some mutual understanding in this case, instead of instant dismissal.

  117. Bart Says:

    VS,

    Indeed, in science “in ratio we trust”. The problem is that much of the criticism of the kind “see here is proof that AGW is bunk” is not very rational, or at the very least leaves out the bigger context of all the evidence in favour of AGW (as if that suddenly magically disappeared). A prime example is that with a very small climate sensitivity, you’d have a terribly difficult time explaining how the great climate shifts in the past could have happened. A bit of modesty in making far reaching claims at refutation of a field which is more mature than many of its “critics” claim would be appropriate.

    In my new post I explained why I think it is evident on physical grounds that the increase in global avg temp is not merely random: A random increase would cause a negative energy imbalance (as Arthur also noted in his latest comment) or the extra energy would have to come from another segment of the climate system (eg the ocean, cryosphere, etc). Neither is the case: There is actually a positive energy imbalance and the other reservoirs are actually also accumulating energy. Moreover, there is a known positive radiative forcing.

    I think it is fair to conclude that the observed increase therefore has not been random.

    A different question would be whether these data, without any physical constraints on them, could mathematically be described as you do (purely stochastic; a random walk). Perhaps they could; I refrain from an opinion on that. I do note, though, that in light of the physical system that these data are a part of, this is a purely academic mathematics question. The physics of it all tells me that it hasn’t in fact been random, since that would be inconsistent with other observations.

  118. Alan Says:

    VS said to Bart …

    “I truly understand that coming from your methodological point of view (i.e. as a climate scientist) you believe that you guys have delivered sufficient proof. However, this is not the case when viewed through the methodological lens of other disciplines, such as mine.”

    This comment is similar to many others that I have observed in the vast anti-AGW blogosphere … and I wonder at what it really means.

    It seems to me that people cast ‘climate scientists’ as having a specific technical expertise … in something other than the issue at hand, e.g. “climate scientists don’t appreciate feedback mechanisms”; “climate scientists don’t appreciate ‘measurement error’” etc. etc.

    Here it is “climate scientists don’t appreciate time series analysis”.

    The assumption is invariably that climate scientists aren’t expert in a specific field and should ‘listen to the professionals’ because they are making goofy mistakes and their conclusions are suspect (and maybe dangerous). Do climate scientists all have the same, specific technical expertise?

    I don’t get it. I work in business – not science. And on major projects we pull together a team with experts in specific disciplines … no one subject matter expert is sufficient.

    It must be the same with climate science research, surely!?

    Research and analysis projects would involve a team. I imagine there is consultation with ‘design of experiments’ experts; instrumentation experts; modelling experts; model coders … and, dare I say it, time series analysis experts.

    Bart, is this the way the climate science research community mobilises?

    If so, it is probably pointless discussing the nuances of time series analysis here unless there are at least two recognisable subject matter experts around, including one who works directly in climate research/modelling/data analysis.

    Are there any here? Bart, you do not claim such expertise, do you?

    So if the pressing issue that VS brings here is that he has real concerns about the application of TSA to the climate science research effort, then perhaps it would be more fruitful to take the discussion to the TSA folks who actually are working now in climate science research.

    VS, why not go and check the hypothesis that TSA is being applied in the way you suspect – directly with the other TSA folks. If it is, and you believe it to be inappropriate, engage in a TSA methodology discussion with them.

    For some time now I have wondered about what you (VS) hope to achieve here.

    All the best, Alan

  119. Heiko Gerhauser Says:

    Hi Marco,

    the term “violations of known physics” is far too strong. What you (ought to) mean is that it seems unlikely that solar has a (net) effect on world global average temperature while CO2 does not, given that there is stratospheric cooling. In principle you could have stratospheric cooling from CO2 and water vapour and still no net warming impact on the troposphere, because the CO2 in some fashion (direct chemistry, say, or due to issues of where the warming occurs) enhances, say, cloud reflectivity to exactly compensate (or does something else, making tree albedo a little higher, say).

  120. Heiko Gerhauser Says:

    Hi Marco,

    and yes my previous attempt was a bit confused/confusing, sorry. I haven’t yet come across someone claiming that water vapour does cause a forcing, and CO2 causes zero forcing.

  121. Tim Curtin Says:

    Marco said:
    March 10, 2010 at 07:22
    “Ah, look, Tim Curtin repeats Watts’ false claims. Tim, removing high altitude and high latitude stations doesn’t do much to the trend. If anything, it introduces a *cooling* bias.”

    Marco, do think a bit. How was the global mean temperature calculated in 1880? And in 2000? Am I wrong that the known temperatures at various locations as of 1880-1910 were aggregated to get Bart’s baseline, and at rather more locations in 1970-2000 ditto? Am I also wrong that in 1880 New York was not a good proxy for temperatures in what is now Kinshasa (formerly Leopoldville), which had no met. station then, as HM Stanley unaccountably failed to establish one when he was there? Am I wrong that to establish a trend between 1880 and 2000 you have to have actual temperatures, aggregated into global series, for the SAME LOCATIONS in both years? Or do you belong to CRU? Do you then claim to know that the temperature trend at, say, Khartoum from 1880 to 1910 was the same as that at San Juan in Puerto Rico? According to Jim Hansen they are indeed perfect matches, as shown below (?). So using San Juan as a proxy for Khartoum (they are at nearly the same latitude) from 1880 to 1910 is totally kosher? And using San Juan’s annual mean as a proxy for Khartoum does nothing to lower GMT in 1880-1910, and using Khartoum actuals for 1970-2000 does nothing to raise GMT then vis-à-vis 1880-1910, as claimed by Bart (who, as ever, like Tamino seemingly keeps quiet when inconvenient facts emerge)?

    Bart, I really would like your comments; you were quick to point out my error earlier, so now it’s your turn, nie waar nie? (pardon my Afrikaans).

              San Juan   Khartoum   San Juan   Khartoum
              min        min        max        max
    Jan       21.6       15         28.4       32
    Feb       21.4       16         28.7       34
    Mar       22         19         29.1       38
    Apr       22.7       22         29.9       41
    May       23.6       25         30.7       42
    Jun       24.5       26         31.4       41
    Jul       24.9       25         31.4       38
    Aug       24.8       24         31.5       37
    Sep       24.6       25         31.6       39
    Oct       24.2       24         31.3       40
    Nov       23.3       20         29.9       36
    Dec       22.4       17         28.8       33
    Ave       23.3       21.5       30.2       37.6

    Annual mean: 26.7 at San Juan, 29.5 at Khartoum

  122. Tim Curtin Says:

    Heiko Gerhauser Said:

    March 10, 2010 at 11:52
    “…I haven’t yet come across someone claiming that water vapour does cause a forcing, and CO2 causes zero forcing”.

    Dear Heiko, do check my regression data above for Pt Barrow (Alaska) and Hilo (Hawaii) showing zero stat. sig. for radiative forcing from CO2 and very strong stat. sig. coefficients for water vapour. Inconvenient, yes, but incontrovertible, given the NOAA data I used.

  123. Heiko Gerhauser Says:

    Hi Tim,

    ok, I was talking about the radiative absorption properties of water vapour and CO2, which are measured in the laboratory. And so far I have come across no-one claiming those are wrong. Do you?

    I don’t see how you can find much from looking for correlations between CO2 or water vapour using local temperatures. Maybe I am wrong, but it seems a bit like a pointless wild goose chase to me.

    On the stations issue, however, I think you’ve got a point. Though even there, while poor station coverage and quality mean the potential error is large, that does not mean 1880-1910 is proven, or even likely, to have been much warmer than indicated in Bart’s graphs above. It might have been quite cold in Africa or Siberia and we missed it.

  124. Marco Says:

    Tim:
    You can make all the objections you want, but it doesn’t change the fact that your claim is based on absolutely no evidence. There are *several* people who used *different* procedures to check for the effect of the stations that were supposedly ‘removed’ (which was lie one: there was retrospective reporting in the early 1990s). Those supposedly ‘removed’ stations actually had a *higher* warming trend. In other words, removing those stations introduces a *cooling* trend.

  125. Marco Says:

    @Heiko (and VS):
    I was maybe being a bit too brief: a cooling stratosphere together with a warming troposphere fits an enhanced greenhouse effect, not an increase in solar radiation (in which case both should be warming).

    What I cannot see is that a 1 W/m2 forcing would result in a different feedback depending on solar or CO2 influence. If 1 W/m2 of sunlight introduces x warming, and subsequently y feedback warming through water vapor, why would 1 W/m2 CO2 introduce x warming, but much less than y feedback warming? The water feedback, after all, is a result of the initial warming.

    However, it *is* possible that B&R did not look at (forgot) the concomitant aerosol emissions that go along with fossil fuel burning. Those would introduce a cooling, which does result in a “CO2-associated” lower feedback per W/m2 of CO2 forcing, but then they just describe the whole issue very, very poorly. In that sense Jan Magnus did a better analysis, but he was stupid enough to take the wrong temperature dataset (what is it with these econometricians?).

    @VS: If a fancy type of mathematical analysis shows the temperature of the universe to be -2 Kelvin, the math may well have been very solid, but the outcome is rather questionable. This should suggest a major rethinking of the input to the equations (and perhaps ultimately the equations themselves). One may be right, but you’d better come with some very good explanations before we start throwing away basic physical knowledge.
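
    Marco’s feedback point can be made concrete with back-of-envelope arithmetic. A minimal Python sketch, with a no-feedback (Planck) response of roughly 0.3 K per W/m2 and a single net feedback factor; both values are assumed here purely for illustration and are not taken from anyone in this thread:

        # If feedback scales with the initial warming, the multiplier is the
        # same whether the 1 W/m2 of forcing comes from the sun or from CO2.
        lambda_0 = 0.3   # K per W/m2, no-feedback (Planck) response -- assumed
        f = 0.5          # net feedback factor -- assumed

        for source in ("solar", "CO2"):
            forcing = 1.0                  # W/m2
            direct = lambda_0 * forcing    # the 'x' warming before feedbacks
            total = direct / (1.0 - f)     # 'x + y': warming with feedbacks
            print(source, "direct:", direct, "K; with feedbacks:", round(total, 2), "K")

    Nothing in this arithmetic distinguishes the two sources; only a mechanism acting on something other than global temperature (aerosols, say, or the spatial pattern of the forcing) could break the symmetry, which is exactly the kind of caveat raised in the surrounding comments.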

  126. Bart Says:

    Tim Curtin,

    The consequence of what you’re saying is basically that the global average temperature was known with less accuracy in 1900 than in 2000. True enough, and known (see eg the green error bands on this figure: http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.pdf)

    That doesn’t invalidate my earlier statement, though it means that the finite probability that there actually was a yearly temp anomaly between 1880 and 1930 that was higher than the lowest anomaly during 1980-2009 is not zero, but some (very) small number. Okay, I’ll give you that.

    But no, you don’t need the exact same stations to compare yearly anomalies. You seem to have bought Anthony Watts’ line of reasoning here, which is totally off the mark. Temperature anomalies are highly correlated in space, and anomalies are calculated such that the result is relatively insensitive to changes in which stations are used (as opposed to the claims of Watts and d’Aleo).

    See eg this and the preceding post: http://www.chron.com/commons/readerblogs/atmosphere.html?plckController=Blog&plckBlogPage=BlogViewPost&newspaperUserId=54e0b21f-aaba-475d-87ab-1df5075ce621&plckPostId=Blog%3a54e0b21f-aaba-475d-87ab-1df5075ce621Post%3a316fd156-fbba-46b0-b3ec-a6748f70d579&plckScript=blogScript&plckElementId=blogDest
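
    The claimed insensitivity can be illustrated with synthetic data. A toy sketch (numpy; the numbers and seed are arbitrary, and this is not GISTEMP’s actual gridding procedure): every station shares one regional anomaly signal but has its own absolute climatology, and half the network is then dropped:

        import numpy as np

        rng = np.random.default_rng(0)
        years, n_stations = 50, 40

        regional = np.cumsum(rng.normal(0.02, 0.1, years))   # shared anomaly signal
        offsets = rng.normal(0.0, 5.0, n_stations)           # differing climatologies
        temps = offsets[:, None] + regional + rng.normal(0, 0.3, (n_stations, years))

        # Anomaly method: subtract each station's own baseline before averaging.
        anoms = temps - temps.mean(axis=1, keepdims=True)
        print("anomaly avg, all vs half the stations:",
              np.abs(anoms.mean(axis=0) - anoms[:20].mean(axis=0)).max())

        # Absolute method: the same dropout shifts the average by whole degrees,
        # because it changes which climatologies are in the mix.
        print("absolute avg, all vs half the stations:",
              np.abs(temps.mean(axis=0) - temps[:20].mean(axis=0)).max())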

  127. Marco Says:

    @Bart:
    Before going into lengthy discussions with Tim Curtin, please take note of the following:
    http://scienceblogs.com/deltoid/2009/03/tim_curtin_thread.php
    You may very well end up with a very long thread going completely nowhere.

  128. Heiko Gerhauser Says:

    Hi Marco,

    the spatial and temporal distribution of the forcing is different, and that might (emphasis might) make a significant difference. I gave you the example of how ice ages are believed to get started, and there it’s also about two forcings where the global average is the same, but in summer at high latitudes there was a big difference.

    I’ve got no idea how significant this really is, but you are making a pretty strong claim when saying something is plain physically impossible.

  129. Bart Verheggen Says:

    I think the term “efficacy” describes how a radiative forcing relates to a certain temperature change. It varies slightly for different forcings, but not very strongly. A factor of 3 difference seems too large. AR4 provides some estimates and discussion of efficacies.

    See also http://pubs.giss.nasa.gov/abstracts/2005/Hansen_etal_2.html

  130. Marco Says:

    @Heiko:
    solar input and CO2 greenhouse effect are rather strongly connected, I’d say. Aerosols are rather different; they have a very uneven distribution over the globe.

  131. Tim Curtin Says:

    Heiko Gerhauser Said (March 10, 2010 at 12:24)
    “ok, I was talking about the radiative absorption properties of water vapour and CO2, which are measured in the laboratory. And so far I have come across no-one claiming those are wrong. Do you?” No, I am referring only to the NOAA data on water vapour (“H2O” = precipitable water, in cm) and on the atmospheric concentration of CO2, [CO2], at some hundreds of locations across the USA, vis-à-vis mean min, max, and Average Daytime Temperatures.

    You then said “I don’t see how you can find much from looking for correlations between CO2 or water vapour using local temperatures. Maybe I am wrong, but it seems a bit like pointless wild goose chase to me.” Why? Surely all science is about measurements. The NOAA provides ALL the data I regressed and reported on above. Do contact me via tcurtin@bigblue.net.au for more info on the data and my regressions.

    Then you kindly added: “On the stations issue, however, I think you’ve got a point. Though even there, while poor station coverage and quality mean the potential error is large, that does not mean 1880-1910 is proven, or even likely, to have been much warmer than indicated in Bart’s graphs above. It might have been quite cold in Africa or Siberia and we missed it.” Not likely; again, if you contact me I can send you a paper addressing your point. Briefly, Hansen & Lebedeff (1987) show how adept they are at fictionalising temperature data where there is none. They even admit (Fig. 4) that less than 40% of the SH had any data in 1900, yet Bart produces graphs with “global” temperatures from 1880 to 1910.

    Then Marco said March 10, 2010 at 13:39
    “Tim: You can make all the objections you want, but it doesn’t change the fact that your claim is based on absolutely no evidence [not true]. There are *several* people who used *different* procedures to check for the effect of the stations that were supposedly ‘removed’ (which was lie one: there was retrospective reporting in the early 1990s).” They were not removed; those in Siberia mostly just ceased reporting. “Those supposedly ‘removed’ stations actually had a *higher* warming trend. In other words, removing those stations introduces a *cooling* trend.” Marco, maybe, or not. Where is your data? Don’t be so shy.

  132. Tim Curtin Says:

    Bart Said March 10, 2010 at 14:09
    “Tim Curtin, The consequence of what you’re saying is basically that the global average temperature was known with less accuracy in 1900 than in 2000. True enough, and known (see eg the green error bands on this figure: http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.pdf)” [I could not get there but believe you]

    “That doesn’t invalidate my earlier statement, though it means that the finite probability that there actually was a yearly temp anomaly between 1880 and 1930 that was higher than the lowest anomaly during 1980-2009 is not zero, but some (very) small number [sic]. Okay, I’ll give you that.” Thanks, but what is your evidence for the smallness? The variability in just Alaska is huge.

    “… you don’t need the exact same stations to compare yearly anomalies. You seem to have bought Anthony Watts’ line of reasoning here, which is totally off the mark [I have not]. Temperature anomalies are highly correlated in space [please provide your evidence], and anomalies are calculated such that the result is relatively insensitive to changes in which stations are used (as opposed to the claims of Watts and d’Aleo).” That is simply untrue.

    Bart, as even Tamino admits (without understanding it), anomalies are no different from actuals, because for each anomaly just divide by 100 and add 14 to get the absolute (see GISS). Thus you are absolutely wrong to imply that anomalies somehow abstract from, or are independent of, actual station data.

  133. MartinM Says:

    “Utter bollocks? How about you come up with that reference first, instead of (de facto) reiterating the position that the hypothesis can never be rejected because it’s ‘physical’.”

    I’m a little confused as to why you (once again) ask me to reference a claim I haven’t made, or even referred to at any point. I also haven’t made the claim that AGW shouldn’t be rejected because it’s physical; quite the opposite, I’ve argued that the model B&R derive is patently unphysical.

  134. MartinM Says:

    Speaking of utter bollocks…

    “…anomalies are no different from actuals, because for each anomaly just divide by 100 and add 14 to get the absolute (see GISS). Thus you are absolutely wrong to imply that anomalies somehow abstract from, or are independent of, actual station data.”

    Oddly enough, given monthly station anomalies, the procedure you recommend will reproduce absolute values only for those stations whose baseline value is 14. Since temperatures tend to change throughout the year, the number of stations which fit that description for every month would be…oh, right. Zero.
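
    The arithmetic is easy to check. A small sketch with a made-up monthly climatology (GISS tables store anomalies in hundredths of a degree, which is presumably where the “divide by 100” comes from; the 14 is GISS’s estimated absolute global mean):

        import numpy as np

        # Hypothetical monthly climatology for one station (deg C):
        baseline = np.array([-5.0, -3.0, 2.0, 8.0, 14.0, 19.0,
                             22.0, 21.0, 16.0, 9.0, 3.0, -2.0])

        absolute = baseline + 0.6        # a year uniformly 0.6 C above baseline
        anomaly = absolute - baseline    # what an anomaly product reports: 0.6 everywhere

        # The 'add 14' recipe returns 14.6 for every month ...
        print(anomaly + 14.0)
        # ... while the actual absolutes run from -4.4 to 22.6:
        print(absolute)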

  135. Marco Says:

    @Tim:
    http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
    http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/
    http://tamino.wordpress.com/2010/03/05/global-update/

    There you go, false claims proven false.

  136. Bryson Brown Says:

    VS is welcome to correct me if I’m wrong here– I’m a philosopher of science and not trained in statistics to his level. But it seems the procedure he describes discounts trends by allowing purely statistical models that include substantial variation in temperatures on arbitrary scales and frequencies.

    The trouble with the method is that a random walk (including year-to-year variation and longer-term variations, all equally likely to move up as down) superimposed on a long-term trend would not be distinguishable from a pure random walk whose low-frequency components (‘trends’, in effect) run over longer periods as well as producing ‘mere’ year-to-year variation. In the latter case, we could indeed run into a period characterized by a remarkable collection of record highs without there being any ‘real’ trend or driving force.

    This only shows that pure statistics allows for models of the temperature changes we’ve observed that don’t include any underlying trend or driving force. This isn’t surprising– in fact, it’s trivial. Pure statistics is strictly mathematical; it can model any sequence of data points you like in any way that formally fits the points. So a random walk with both short and longer-term variation of the right scales can do the job.

    What that leaves out, as some points above indicate, is the known physical dynamics of the climate system– and I’m not relying on climate models here, just basic physics. For example, OTBE, when temperatures are higher, the earth should tend to cool down, since higher temperatures produce more outgoing long wave radiation. If that doesn’t happen, we should seek a causal explanation of it– in principle this could include changes in solar input, GHG effects, high-altitude clouds or some other cause that manages to increase the retention of heat energy in the earth to counteract the higher IR emissions from the warmer surface and atmosphere. Climatologists’ work aimed at evaluating these factors is motivated by known physics, not just by statistics.
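
    The distinction drawn above can be made concrete with a simulation. A minimal sketch (numpy/statsmodels; seed, length and step size are arbitrary): generate a pure random walk, then let ordinary least squares fit a trend line to it:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 130                                    # roughly the length of the GISS record

        walk = np.cumsum(rng.normal(0.0, 0.1, n))  # random walk: no real trend by construction
        t = sm.add_constant(np.arange(n))

        fit = sm.OLS(walk, t).fit()
        print("slope:", round(fit.params[1], 4), " p-value:", round(fit.pvalues[1], 6))
        # The p-value is typically far below 0.05: plain OLS happily certifies
        # a 'trend' in trendless data. That is why the unit-root question is
        # asked before trend-fitting -- and why physics, not statistics alone,
        # has to choose between statistically indistinguishable models.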

  137. Bart Says:

    Bryson Brown,

    Very thoughtful comment, and it encapsulates much of what I was trying to get at.

  138. Speaks for itself Says:

    http://tamino.wordpress.com/2010/03/11/not-a-random-walk/

  139. Bart Says:

    VS, I urge you to take a look at Tamino’s detailed reply to your assertions.

    He agrees (and shows) that a random walk could show a spurious trend, and that the ADF test can distinguish it from a real deterministic trend, but warns that there are choices to be made in using the statistic that could influence the result in some cases: notably whether to allow for drift or for an underlying trend.

    If one allows for the presence of a trend, the null hypothesis of a random walk (as determined from the presence of a unit root) is more often rejected than when one doesn’t. This makes sense. Even then, a real random walk (also in the presence of a spurious trend which is significant according to OLS regression) can be distinguished (see the example of his second figure). However, for the GISS temperature series the hypothesis of a random walk is clearly rejected if one allows for the presence of a trend. How did you come to the opposite conclusion of a random walk? By omitting the potential presence of a trend? If so, why?

    In Tamino’s words:

    “How did VS fail to reject it? I suspect he excluded a trend from his ADF test. He may also have played around with the number of lags allowed, until he got a result he liked.”
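
    The effect of allowing for a trend can be seen directly in the ADF implementation in Python’s statsmodels (a sketch on simulated trend-plus-noise data; the exact p-values depend on the draw, and this is not a rerun of either VS’s or Tamino’s analysis):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        n = 130
        series = 0.007 * np.arange(n) + rng.normal(0.0, 0.1, n)   # trend + noise

        # regression='c'  : test equation has a constant only
        # regression='ct' : test equation has a constant AND a deterministic trend
        for reg in ("c", "ct"):
            stat, pval = adfuller(series, regression=reg, autolag="AIC")[:2]
            print(reg, "unit-root p-value:", round(pval, 4))
        # Without the trend term the unit-root null often survives; with it,
        # the same trend-stationary series is correctly identified.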

  140. Tim Curtin Says:

    Marco: thanks for links, and to “Speaks for itself” for his/her link to Tamino’s latest on random walks, which I have to admit is rather good. But Tamino and all others who dwell on “global” temperatures have yet to address the inherent folly (see McKitrick & Essex) of saying anything about “global” temperature. What is the operational significance of that concept? Exactly nil to any engineer or farmer or housewife/husband anywhere at any point in time. Wow – yesterday it was -8 oC overnight in Moscow, and 10 oC here in Canberra, so do I put on an extra blanket or not? Or turn that into anomalies from our respective 30-year norms for 11th March, quite likely -1 oC for Moscow and -1 oC for Canberra. So, Bart – we have a new ice age, nicht waar?

    Even more looney is aggregating trends for min/max temps at various locations.

    BTW, Excel 2007 is incapable of correctly graphing series showing the projections of mean temperatures of any 2 locations from now to 2100 from actuals (1960 to 2007) – for Excel, the average in 2007 of 21.323 (Hilo) and -61.3139 (Pt Barrow) is not -20.0418 but -61.3139!!! With Hansen in charge at NASA, the likelihood of any new moonshot ever reaching the moon is nil!

  141. VS Says:

    Answer to Tamino’s ‘amazing’ post listed below.

    I really have no time for amateurs like these. Seriously.

    VS // March 12, 2010 at 12:04 pm | Reply

    Hahaha,

    I love it how you copy-pasted the first ten pages of an undergraduate textbook in Time Series Analysis, and ‘impressed’ everybody here with ‘astounding’ mathematical statistics.

    OK, lets get into the matter.

    First off, I didn’t do any ‘cherry picking’.

    In fact, YOU were the one cherry picking, by using the BIC for lag selection in your ADF test equation. Any other information criterion (and I bet you tried them all) in fact arrives at a larger lag value and subsequently fails to reject the null hypothesis. The reason I didn’t use the BIC is because it arrives at 0 (yes, zero) lags in the test equation. I actually noted this in the comments later on, on Bart’s blog.

    Look it up, down the thread (I used the AIC; there was a typo in the first post, which I corrected later on).

    What kind of an effect does using 0 lags have? Well, residual autocorrelation in the test equation that messes up the ADF test statistic. Higher lag selections successfully eliminate any residual autocorrelation. Remember why the ADF test is ‘AUGMENTED’? Exactly because of the autocorrelation problem.

    Also, using ANY other information criterion to arrive at your lag specification fails to reject the null in the level series, under ANY alternative hypothesis (check that as well, master statistician), i.e. no intercept, intercept, and intercept with trend.

    Try that again, and report it, will you?

    Finally, you selectively quoted me there. I said ‘de facto bounded’ because the VARIANCE of the error term governing the random walk is FINITE. How hard is that to understand for somebody pretending to be a statistician? Simply calculating trends, in light of these test results, is spurious, and you should know that (unless you were ’self taught’ or something similar).

    Look at the temperature series over the past couple of thousand years. Where do you see a trend? There is a cyclical movement, but a deterministic trend? Nope…

    I seriously have no time for this kind of amateur nonsense, as well as your lashing out at economics. Economists are at least conscious of their unconsciousness. Less can be said about the likes of you.

    I’m not posting here anymore. If you want to have a chat, go to Bart’s thread, and I’ll consider educating you (but given your unfounded arrogance, the chances are slim).

    Good day.

    Your comment is awaiting moderation.

    VS // March 12, 2010 at 12:09 pm | Reply

    over the past couple hundred thousand years… not couple of thousand years…

    Your comment is awaiting moderation.
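
    Setting the tone aside, the technical claim here (that the lag order chosen for the ADF regression can flip the verdict, and that too few lags leave autocorrelated residuals, invalidating the test) is mechanically checkable. A sketch on a simulated persistent series, illustrative only; this is not the GISS data and not a reproduction of anyone’s actual output:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.tsa.stattools import adfuller
        from statsmodels.stats.diagnostic import acorr_ljungbox

        rng = np.random.default_rng(7)
        n = 200
        e = rng.normal(0.0, 0.1, n)
        y = np.zeros(n)
        for i in range(1, n):                 # stationary but persistent ARMA(1,1)
            y[i] = 0.95 * y[i-1] + e[i] + 0.5 * e[i-1]

        for crit in ("AIC", "BIC"):           # criteria can pick different lag orders
            stat, pval, usedlag = adfuller(y, regression="c", autolag=crit)[:3]
            print(crit, "-> lags:", usedlag, " unit-root p-value:", round(pval, 4))

        # A zero-lag Dickey-Fuller regression on a series like this leaves
        # autocorrelated residuals, which is what invalidates the test statistic:
        dy, ylag = np.diff(y), y[:-1]
        df0 = sm.OLS(dy, sm.add_constant(ylag)).fit()
        print("Ljung-Box p-value of 0-lag residuals:",
              acorr_ljungbox(df0.resid, lags=[5])["lb_pvalue"].iloc[0])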

  142. Tim Curtin Says:

    Hi Bart (& VS)

    I think we need to get back to basics. Random walks are to some extent a red herring. We have claims of monotonic “global” temperature rises since 1850 (CRU) or 1880 (GISS), and evidence of monotonic increases in the atmospheric concentration of CO2 (hereafter [CO2]) since 1958. If we take the period of comparability of rising temperatures and rising [CO2], which is only since 1958, there is zero stat. sig. correlation between them anywhere on the planet unless one uses absolute values of both variables, when the elementary Durbin-Watson test shows such correlations to be spurious. If we use the IPCC’s formulation of “radiative forcing”, which asserts that it is the CUMULATIVE quantum of GHGs (which are overwhelmingly CO2) that matters, then there is no evidence anywhere that first-differenced changes in temperature (which is what the IPCC asserts is the relevant dependent variable) are in any way correlated with the IPCC’s definition and measurement of radiative forcing. Random walks are irrelevant – there is NO correlation between I(0) [CO2] and I(1) temperature.
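
    The spurious-regression diagnosis invoked here is easy to reproduce in miniature. A sketch with two independent synthetic random walks (numpy/statsmodels; deliberately not the CO2 or temperature data, so it illustrates only the mechanism, not the conclusion):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(3)
        n = 50
        x = np.cumsum(rng.normal(0, 1, n))   # two *independent* random walks
        y = np.cumsum(rng.normal(0, 1, n))

        fit = sm.OLS(y, sm.add_constant(x)).fit()
        print("R^2:", round(fit.rsquared, 2),
              " slope p-value:", round(fit.pvalues[1], 4),
              " Durbin-Watson:", round(durbin_watson(fit.resid), 2))
        # A 'significant' slope with DW far below 2 is the classic signature of
        # spurious regression between integrated series (Granger & Newbold 1974).

    Whether that textbook diagnosis carries over from the synthetic case to the actual CO2 and temperature series is, of course, exactly what the rest of this thread is arguing about.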

  143. VS Says:

    Guys, I really have some pressing matters to attend to, otherwise I would love to chat on the topic.

    Take a look at a nice discussion of cointegration as related to the BR paper, here:

    http://landshape.org/enm/polynomial-cointegration-rebuts-agw/

    and especially here

    http://landshape.org/enm/cointegration-summary/

    I think it ought to be interesting to anybody with an actual ‘Open Mind’.

  144. VS Says:

    (Tim, I will try to get back to your newest post ASAP, I really need to run now :)

  145. Bart Verheggen Says:

    Tim Curtin,

    Nobody expects a perfect correlation of global avg temp with CO2, due to eg weather-related variability and the fact that CO2 is not the only climate forcing. That said, the correlation coefficient between the two variables (taking ln(CO2)) is 0.87 (0.77 if autocorrelation of the residuals is taken into account). With any solar index the correlation would be much lower. And as I stated before, physically the trend must be deterministic, otherwise it is inconsistent with other observations and/or conservation of energy.

    CO2 absorbs IR and reemits it in all directions; this is basic physics, corroborated by measurements. The effect is that the planet holds more energy and warms. Or do you have an explanation as to why the earth would not warm in response to more GHG in its atmosphere? There is some basic physical understanding of the system, despite uncertainties which will always be there with geophysical science.
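
    For readers wanting to reproduce this kind of number, a sketch of one way to compute it (the AR(1) effective-sample-size correction is a common textbook recipe and an assumption here; the exact procedure behind the 0.77 figure isn’t stated):

        import numpy as np

        def corr_ln_co2(temp, co2):
            """Pearson r of temperature vs ln(CO2), plus an AR(1)-adjusted
            effective sample size for judging its significance."""
            x = np.log(np.asarray(co2, dtype=float))
            t = np.asarray(temp, dtype=float)
            r = np.corrcoef(t, x)[0, 1]
            resid = t - np.polyval(np.polyfit(x, t, 1), x)   # residuals about the fit
            r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]    # lag-1 autocorrelation
            n_eff = len(t) * (1 - r1) / (1 + r1)             # Quenouille-style correction
            return r, n_eff

        # usage: pass aligned annual arrays, e.g. GISS anomalies and Mauna Loa ppm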

  146. Marco Says:

    @VS: tamino has a rebuttal for you. You can enjoy yourself with having to rebut the Phillips-Perron test.

    The whole discussion between you two is really sounding like the question a statistician once asked me: “Do you want it to be significant or not? We can choose between two methods, and one of them will make it a significant difference”.

  147. Ron Broberg Says:

    VS: Claims of the type you made here are typical of ‘climate science’. You guys apparently believe that you need not pay attention to any already established scientific field (here, statistics).

    VS: It would do your discipline well to develop a proper methodology first,…

    Nothing condescending in your opening remarks. :eye roll:

    VS: Also, I emailed Beenstock about the data he used when I first read the paper (outside of climate science, that’s quite normal), and he wrote me that all the data they use come from GISS-NASA.

    Let’s see … you take an insulting shot at climate science not providing data and in the very same breath point out that all the data used came from a climate science center. Genius.

    VS: Some of my theoretical physicist friends though, who’s nuanced judgement on these matters I sincerely trust, have endorsed it.

    In other words … “I have a friend of a friend …” Sounds like a ‘baseless claim.’

    VS: The problem is the anti-scientific attitude, based on insults and baseless claims, encouraged by agitators like Halpern (aka Eli Rabbett).

    VS: Now that’s something I don’t endorse.

    It’s not something you endorse.
    It’s something you engage in.
    I’m actually interested in the statistics you bring forth here.
    But your assumed air of superiority and snide, sniping remarks are rather poisonous.

    VS: This MUST stop. The debate is being poisoned by individuals. I understand the pressures, and the fact that many of these individuals strongly believe that they are ’saving humanity’ and must ‘take action NOW’, but this tone isn’t helping at all.

    Surely you must agree with me here.

    More than you can believe.
    I hope this mirror has been helpful.

  148. VS Says:

    Marco,

    You people (yes, you Marco, Tamino and other Halpern-like types) are starting to become quite comical with your Realclimatesque ‘rebuttals’. If you guys knew anything about econometric/statistical inference methodology, you would know how to draw your conclusions…

    …and I would bother to respond to the PP test if doing so would actually serve a purpose. But it doesn’t. And like I stated earlier, I don’t have the time/energy for a spit-fight, especially with somebody, like Tamino, who doesn’t even understand how autocorrelation pollutes the ADF test (see also the subsequent post by Alex, who used several different entropy measures to determine lag length; the issues posted still weren’t actually addressed).

    As for that BIC, I clearly rectified that here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1286

    Just for the record, since some of the posters on Tamino’s blog seem to be confused. The BIC is also called the SIC. See here:

    http://en.wikipedia.org/wiki/Bayesian_information_criterion

    Tamino’s arrogance is startling. This ‘blogosphere’ apparently got to his head.

    The I(1) property has been firmly established in the literature, by both proponents (Kaufmann etc) and skeptics (Beenstock etc.). I’m not going to explain that to a C-grade statistician acting like he’s Heckman/Granger.

    You, however, are free to choose your priest.

    Hi Tim,

    You’re absolutely right that the random walk thing is a red herring. Apparently however, the audience knows so little about statistics that it actually works pretty well as a distraction.

    I would still be very careful about regressing different order integrated processes just like that. Just like any found relationship would be invalid, so is any rejection :) Furthermore, by incorrectly applying the techniques, Kaufmann et al (2006) actually ‘confirm’ AGWH.

    Still, I maintain that Kaufmann has the marks of a good statistician, and any honest mistake is just that, an honest mistake. I’m pretty sure he would admit that too, and he seems to know what he’s talking about.

    People like Tamino, however, who don’t even want to understand what they are doing wrong, are not worth my time.

    Hi Ron,

    Read Tamino’s style, and read my posts in this 40-page discussion here. You will note a difference in tone. Besides, I argued every single position in detail.

    Take the time, if you will.

  149. VS Says:

    PS.

    Ron, you really quoted me out of context there.

    (1) I stated that I am not passing judgement on the GT paper, but I also clearly stated where I got my ‘leaning’ from. Take a closer look at the posts you are quoting from.

    (2) Methodological disagreement is not an anti-scientific attitude. Calling people ‘bonkers’ and ‘idiots’ out of the blue, and stating that they are engaged in a ‘circle jerk’ and such things, is.

    (3) My post replying to Halpern was pretty harsh, but I believe anybody reading his blog entry, and understanding the lack of understanding of statistics he has, would conclude that he clearly asked for it.

    If my tone was insulting, I’m sorry, but the heat of the debate sometimes takes over. In general, I think I was being quite forthcoming.

    As for my remarks towards Tamino and Halpern: Again, I’m pretty sure both of them asked for it.

  150. Ron Broberg Says:

    I appreciate that.
    Thank you.

    I have no real qualm with people engaging in ‘string defense.’ But own up to it.

    It’s the “what’s wrong with you people, I’d never act like that” proposition that I object to. Yes, you would and yes you do.

    And I think I ‘get it.’ I think you come from a clique where things like making put-downs about data availability get a chuckle. And that’s OK as long as you don’t then turn around and try to play the ‘IZ A VICTUM’ card.

    Meanwhile, I learned more about the method being used from Tamino’s post than I learned here. That’s because Tamino was aiming his post at an audience with less statistical knowledge than you possess. But I appreciate you bringing this method forward and hope that you two can continue the discussion – even if by proxy. (I don’t hold it against you that you are cautious about continuing the discussion on his blog.)

    Do you have any comments on his use of Phillips-Perron?

  151. Ron Broberg Says:

    string=strong

  152. VS Says:

    Hi Ron,

    Actually that ‘IZ A VICTUM’ comment also evoked a chuckle over here ;) You have somewhat of a point there… At the time, however, I was completely shocked by how rudely dismissive Tamino was… (see quote)

    I’ll try to get back to the Phillips-Perron test soon, but I actually have no time today (or in the coming few days) even for the postings I’m making now.

    In short, however, it’s a fringe test with a higher rejection tendency. There are other tests as well that still confirm I(1). I’ll argue this in detail later, though.

    In the meantime, do note that Kaufmann et al (2006) would ‘love’ to have been able to conclude I(0) as that clearly fits their hypothesis (I(2) would have been great too for them, hehe, it’s the I(1) that’s really bugging :).

    But they didn’t, because, even though their analysis is somewhat ideologically tainted, they are still proper statisticians (and they have my respect for that).

    However, I’ll try to get on it ASAP. Honest and good natured interest is always welcome, and deserves to be addressed :)

  153. dhogaza Says:

    Reality:

    1. Satellite measurements of infrared emissions from the top of the atmosphere are consistent with physics-based expectations of CO2 absorption of long-wave radiation.

    2. The primary expected positive feedback is water vapor. Model predictions have been confirmed with detailed observations using the AIRS sensor on NASA’s Aqua satellite. The water vapor in the troposphere is responding as expected to changes in temperature.

    3. Downwelling infrared radiation has increased in a way consistent with the underlying understanding within physics.

    So, one of two things is true:

    1. A whole bunch of physics, backed up by observations, is wrong.

    2. A non-specialist who has conveniently found that a particular choice of statistical test suggests that observed temp trends are simply a random walk with no established trend perhaps made a boo-boo.

    Occam’s razor makes it easy to choose between the two.

  154. VS Says:

    Hi dhogaza

    Thank you for your contribution. However, you would be well advised to read up on the contents of this thread.

    First of all, the AGWH is not a fundamental model, it is a phenomenological one. The points (e.g. 1, 2, 3) you call ‘reality’ imply some correlation, but definitely do not constitute sufficient proof (in the eyes of a lot of people), also in the natural sciences. Consequently, disproving the AGWH does not disprove physics. Your claim is too strong.

    Second, you would also be well advised to look at the source of those ‘convenient’ statistical techniques before branding them as such.

    Any discussion on the topic must start with the acknowledgement of the possibility that the hypothesis in question is rejectable (i.e. might not be true), as well as the opposite. Holding your current position prohibits debate.

  155. Marco Says:

    VS:
    Gee, I point to the PP test and you say it doesn’t serve a purpose to comment on it; Ron Broberg does the same and you say “I’ll come back to that”.

    With regards to arrogance: you come barging in, claim B&R refute AGW, and then do all kinds of grandstanding and call others arrogant when they point to various issues with the analysis. Talk about arrogance!

  156. Adrian Burd Says:

    VS,

    Just a small point, but I think you are being your own worst enemy here. Your posts are far less intelligible than those of your bête noire, at least to physicists such as myself.

    Now, this is no real excuse, but like most (all?) of us, I have limited time to spend learning about these things. I read Tamino’s post and can immediately see what he is saying. His language is clear and straightforward, his terms are defined, and he spends time to give clear examples of what he is talking about.

    On the other hand, your posts are opaque, hard to follow and jargon-filled. This is not to say that they are wrong, just very difficult to get to grips with. You dismiss others with phrases such as “so-and-so does not understand simple autocorrelation” and expect your readers to know what you’re thinking. I, for one, do not.

    So, with limited time on the part of the reader, guess who wins out?

    As a physicist who works in an interdisciplinary field, I come across the value of good communication on a daily basis. Not only do methodologies differ between subject areas, but frequently similar terms and concepts mean subtly (sometimes grossly) different things. Determining what those are and translating what you mean into terms that can be easily understood by others is key to having one’s ideas successfully accepted in another field.

    I would contend that Tamino is successfully able to do this. So far, I do not see that you have, at least for me.

    I for one would dearly love to understand the subtleties of the different discussions being presented here and on Tamino’s blog. However, given my limited time and your opaque posts (as well as my dim brain), it will have to wait till the next lengthy plane ride – if ever.

    In passing, I will note that your very first post in this thread contained some quite harsh and dismissive language towards Bart in particular, and climate scientists in general. Such a tactic almost always raises the hackles of those being so summarily dismissed.

    As for Bart, like Gavin over on RealClimate, I think he has been the paragon of politeness – kudos to him.

    Just my 2d worth.

    Adrian

  157. jfr117 Says:

    continue to ‘barge in’ VS, please. i enjoy seeing actual debate, rather than the typical zombie parroting that appears on these blogs. i have learned a lot seeing you go head to head with tamino. kudos for pushing the comfort level for everybody! although it’s like swallowing bad medicine, i vote for further interactions with tamino. he has built his own empire and it is good to see the self-imposed king challenged!

  158. dhogaza Says:

    “Any discussion on the topic must start with the acknowledgement of the possibility that the hypothesis in question is rejectable (i.e. might not be true), as well as the opposite. Holding your current position prohibits debate.”

    If it’s not physically plausible, it ain’t going to work. Reminds me a bit of the fable of the mathematical proof that a bumblebee can’t fly.

    You’re an economist. The odds of your overturning a large body of well-established physics are statistically indistinguishable from nil.

    It’s DK all the way down, boys and girls.

  159. Scott Mandia Says:

    LOL, and I thought this wonderful thread had petered out!

    “As for Bart, like Gavin over on RealClimate, I think he has been the paragon of politeness – kudos to him.” Agreed!

    I have learned much from this thread and the subsequent one at OpenMind. I also agree that Tamino makes the stats easier to understand than VS.

    VS, what would you need to “see” in order to change your position about the random walk? Play your own devil’s advocate.

    I am with Bart, Marco, Arthur Smith, dhogaza and others who view this in the physical sense. Sorry for the simplicity but I see this as follows:

    The established GHG physics (Arthur Smith’s paper and not G-T’s paper) tells us that we should be warming. We are observing warming with multiple lines of evidence pointing toward increases in GHGs.

    Models are initialized in the past and then run forward. These models produce the correct warming only when using Arthur Smith’s physics. Without GHG forcing, we cannot get this warming – in fact, we should be cooling. These same models show that we are headed for climate change that will be faster than we can adapt to. Uh oh.

    You come along and tell us that you think the paleo record is very suspicious due to the stats used to create them. BTW, are borehole T stats different? They show a “hockey stick”.

    Then you claim that the models are probably not very good. Because we cannot create another Earth to test GHG forcing, these models are the only method we have for “experimentation”. Throw them out?

    So we throw out physics, observations, and also the only way to test (models) because you have a stats method that shows little to no correlation between CO2 and T?

    Again, VS, what would you need to “see” in order to change your position about the random walk? Play your own devil’s advocate. I am truly curious what would change your mind.

    Here is what would change my mind but, alas, I cannot wait for the unlikely: the next 30 years show a decreasing trend in global temperature. If that happens, I will jump ship, so to speak.

    Sorry if this is a ramble, I am tired and hungry. :)

  160. Jim Eager Says:

    Actual Debate?

    Over at Tamino’s place VS wrote: “Look at the temperature series over the past couple [hundred] thousand years. Where do you see a trend? There is a cyclical movement, but a deterministic trend? Nope…”

    The man appears to be utterly ignorant of the Milankovitch Cycles, the very real physical process that drives the trend that he cannot see.

    Talk about the arrogance of ignorance.

  161. Tim Curtin Says:

    Re dhogaza (12 March 18.17), you may be right in your (1), but in your (2) how do YOU know that the water vapor in the troposphere is responding to changes in temperature? Regression analysis of the climate data at Pt Barrow for July from 1960 to 2006 shows that it is water vapor that best explains the changes in mean minimum temperature there over 46 years (t=7.26, p=6.98E-09), whilst 1st-difference changes in RF via [CO2] have only a negative but statistically insignificant effect, and although cumulative growth of radiative forcing from [CO2] (IPCC definition) has a very slight positive impact, it is utterly insignificant (t=0.09, p=.9). Remember what Einstein said about hypothesis testing? Then reread the contributions by VS here. The argument is perhaps not so much about the physics, but mainly about the significance and direction of claimed effects.
    However, there is always the possibility of reverse causality, so perhaps it is as you claim: changes in temperature change the level of tropospheric water vapor. The news is not good, as it appears that changes in mean minimum, not mean maximum, temperatures explain changes in water vapor. What is your physics explanation for that? Changes in mean maximum and average daylight temperature have no significant impact on changes in water vapor, while, as so often, radiative forcing has only a negative, but insignificant, effect.
    Moreover, the significance levels are much higher for the water vapor effect on mean minimum temperature than vice versa. Any questions?
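
    The kind of regression described here is easy to set up, and easy to over-read. A sketch with placeholder arrays (deliberately NOT the NOAA data; the synthetic construction makes the H2O coefficient ‘significant’ by design, purely to show what the quoted t- and p-values do and do not establish):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n = 47                                        # one July value per year, 1960-2006
        h2o = rng.normal(1.5, 0.3, n)                 # precipitable water, cm (placeholder)
        rf_co2 = np.linspace(0.0, 1.5, n)             # smooth CO2 forcing term (placeholder)
        tmin = 2.0 * h2o + rng.normal(0.0, 0.5, n)    # constructed to be H2O-driven

        X = sm.add_constant(np.column_stack([h2o, rf_co2]))
        print(sm.OLS(tmin, X).fit().summary())        # t- and p-values per regressor

        # With ~47 points, a smoothly trending regressor, and none of the
        # time-series issues discussed throughout this thread dealt with,
        # such t-statistics cannot settle the direction of causality.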

  162. dhogaza Says:

    “how do YOU know that the water vapor in the troposphere is responding to changes in temperature?”

    And I give you Tim Curtin … chuckles all around, boys and girls.

    Why do clouds form?

  163. Jim Eager Says:

    Why, cosmic rays, don’t ya know, dhog.

    The stupid, it burns.

  164. dhogaza Says:

    That’s a funny response, which I thoroughly enjoyed, Jim :)

    (if you’re going to abbreviate my handle, though, it’s “dho”, not TCO’s “dhog”; it’s a type of raptor trap invented by Arabs 1,000 or so years ago, a “dho gaza” – I do raptor banding field work).

  165. Rattus Norvegicus Says:

    I’ve been doing some not terribly deep thinking about the implications of the “random walk” theory of climate change.

    Now, if, as VS asserts, any changes in past global temperature and the current change in global temperature are due to random walks, then given a suitable length of time shouldn’t some rather improbable realizations of this random walk have happened? Given 4.5 billion years, which is a very long period of time, shouldn’t the Earth have had a realization of the random walk which leads to Venus-like conditions? Now granted, my knowledge of statistics is related only to the evaluation of the odds of filling a poker hand vs. the odds being offered by the pot, but I was pretty good at that, and my gut can smell a bad hand when I see it.

    Another thing that bothers me about VS’s arguments is that climate models are “phenomenologically based”. I take this to mean that they are statistical models. He bases this “argument” on the parametrizations used for subgrid-scale processes. Now, in the last paper I read on GISS Model E, this was around 6 parameters, all of which were based on experimental or observational evidence. The vast majority of the model is based on physics, and this holds true for all of the climate models in use. Some are better than others, but the fact remains that they are based on the physics of climate processes and not statistical relationships.

    So what do you say, BS, oops, VS?
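
    The poker instinct above can be put in numbers: a random walk’s typical excursion after n steps grows like sigma times the square root of n. A minimal sketch, with an assumed 0.1 C of year-to-year ‘weather’ noise (the figure is illustrative, not an estimate from the data):

        import numpy as np

        sigma = 0.1                          # assumed step size, deg C per year
        for years in (100, 10_000, 4.5e9):
            print(f"{years:>13,.0f} years -> typical excursion ~ "
                  f"{sigma * np.sqrt(years):,.0f} C")
        # ~1 C after a century, ~10 C after 10,000 years, and several thousand C
        # after 4.5 billion years: an unbounded-walk climate would long since
        # have boiled or frozen, which is the physical point being made.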

  166. Al Tekhasski Says:

    Alan wrote on March 10, 2010 at 11:42

    “The assumption is invariably that climate scientists aren’t expert in a specific field and should ‘listen to the professionals’ because they are making goofy mistakes and their conclusions are suspect (and maybe dangerous). Do climate scientists all have the same, specific technical expertise?

    I don’t get it. I work in business – not science. And on major projects we pull together a team with experts in specific disciplines … no one subject matter expert is sufficient. It must be the same with climate science research, surely!?

    Research and analysis projects would involve a team. I imagine there is consultation with ‘design of experiments’ experts; instrumentation experts; modelling experts; model coders … and, dare I say it, time series analysis experts.”

    You are absolutely correct, climate change is not science, it is a project, an applied project. But you are imagining too much. As you said, as a major complex project it is a conglomerate of various disciplines. Unfortunately, the physical conditions of this project usually reside outside the established margins of the respective precise sciences. That creates quite a few “inconveniences” for true experts in the corresponding disciplines.

    For example, all data fields in climatology are undersampled by a factor of 100 at least, so honest experts in data analysis would not touch them with a 30-foot pole or attest to the accuracy of conclusions. If we take the computational aspects of the project, it requires a million-fold greater computing capability, so experts in Computational Fluid Dynamics would simply walk away. If we look at attempted parameterization of turbulent processes in the atmosphere, the predictive power of primitive theoretical models is nil, while experimental validation of parameterizations would require about a quarter-million years to match the quality of parameterizations in industrial CFD. If we look at attempted modeling with ensembles of climate trajectories – same thing: they have no clue about the topological properties of the system and the fractal-like complexity of its state space. If we look at the carbon cycle and ocean surface-air exchange, it is again apparent that information about highly variable instantaneous fields of relevant physical quantities (concentration gradient, gas exchange coefficient, wind) is completely missing, which prevents any estimation of CO2 fluxes from being meaningful. Etc, etc.

    In short, the climate change problem is a conglomerate of disciplines where each one must be applied outside its conventional experimentally verified range, where conclusions become highly speculative and uncertain. Only ignorant or intellectually dishonest individuals are embarking on this global problem, where they have to “cut corners” and accept many things as “good enough”, or even “hide the decline”. So, the problem is approached not as a business project, it runs mostly by dilettantes, not experts. Maybe because there is no problem at all, just smoke and mirrors by some ambitious individuals who were historically unfortunate not to own oil fields.

    [Reply: Not only do you seem to come from an entirely implausible conspiracy theory angle, you’re trying to paint a whole scientific field as incompetent or even dishonest. No more of that. BV]

  167. Tim Curtin Says:

    d…hog, for that is what you are, with your belief (at Tamino’s) that temperature anomalies are not related to actual temperatures. What I asked for were your regressions, especially the recursive ones that show all the feedback processes, such as from temperature to water vapor to temperature, and how CO2 initiates this circularity.

    And of course anomalies at any given place depend on the average of the actual temperatures for that place’s reference period, but then Hansen (1987) averages anomalies over and over again, so it is questionable whether GISS means anything at all. The main purpose of the anomalies, when multiplied up by 100 as by GISS, is to make trivial temperature changes, like 0.7 oC since 1900, seem incredibly ominous.

    As Al implies, GISS is run mostly by dilettantes, like the Hog – but I would like to see him in action with his raptors, sounds great.

  168. VS Says:

    Hi guys,

    For everybody doubting the ‘physical reality’ of the random walk model, and for all those who are conveniently ‘ignoring’ my boundedness argument, here you have it from a proper physicist (he just responded to Tamino’s ‘response’ to my comments). I hope he is a bit better at communicating it than I am:

    http://motls.blogspot.com/2010/03/tamino-vs-random-walk.html

    And since everybody is so big on ‘authority’ in these spheres (however irrelevant that is, but OK), here’s his profile:

    http://en.wikipedia.org/wiki/Lubo%C5%A1_Motl

    I encourage everybody who’s planning on writing something highly intellectual like ‘You suck! Economics sucks! Physics rocks! Climate is not RANDOM! Oh, and VS, if you didn’t hear it the first time around, you SUCK!’ to read this blog entry before posting.

    ———————————

    Also, any critical reader would have already spotted this comment by Alex on Tamino’s blog. It is definitely worth a look (Tamino ‘dances’ around the matter in his follow-up instead of replying properly), as Alex (who should really quit posting there and come over here :) clearly showed that Tamino engaged in some extraordinary cherry picking when presenting his results.

    In particular, this part:

    “I downloaded the data myself (including the mean for 2009) and performed several DF-tests with a drift and a linear trend. The results really depend on the selection criteria one uses. The results I get are:

    Lag selection       H0: unit root   Trend      p-value trend
    criterion           (p-value)       variable   variable
    AIC                 0.4301          0.148415   0.0111
    BIC                 0.0001          0.239066   0.0000
    Hannan-Quinn        0.4301          0.148415   0.0111
    Modified Akaike     0.9246          0.119943   0.0928
    Modified Schwartz   0.8237          0.124735   0.0559
    Modified H-Q        0.9246          0.119943   0.0928

    As one can see, only when the BIC selection criterion is applied is the null hypothesis of a unit root rejected. However, looking at the residuals of this test equation, there clearly is autocorrelation present, which makes this test invalid (as explained in the above article). When all the other selection criteria are used, no autocorrelation seems to be present in the residuals, and the null hypothesis is not rejected. Now, I don’t believe that someone can write such a good article on unit root testing and subsequently fail to look at different selection criteria. Seems like the author was cherrypicking himself! :)”

    I think that quote speaks for itself.

    ———————————

    As for that PP test, use some common sense here, please. For starters, take a look at the literature on the topic (in my first post):

    ** Woodward and Grey (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    Indeed, Kaufmann and Stern (2000), two AGWH proponents, also find, using the ADF and KPSS tests, an I(1) process, and, using the PP and SP tests, an I(0) process. However, in Kaufmann et al (2006) they treat the variable GLOBL (global mean temperature) as an I(1) process.

    Guess why they came to that conclusion, in light of all the tests?

    ———————————

    Finally, I saw some comments on Tamino’s blog about me ‘writing off AGWH’ in a ‘couple of paragraphs’, while Tamino delivered ‘2000 words’ with I don’t know how many formulas (which truly resemble an undergraduate TSA textbook; if you want to see TSA as you are supposed to see it, take a look at a standard graduate textbook like James D. Hamilton’s Time Series Analysis, Princeton University Press).

    In any case, I just did a word count on how much I posted here in this thread: over 13,500 words in total.

    Now I ‘understand’ that various paladins, who just got hotlinked to this page through Tamino’s blog, want to start fresh ‘fights’. However, do take the time to read the entire argument first before barging in.

  169. Tim Curtin Says:

    Well said, VS!

    Meantime, here’s my comment on Bart’s post: “Nobody [sic] expects a perfect correlation of global avg temp with CO2, due to eg weather-related variability and the fact that CO2 is not the only climate forcing. That said, the correlation coefficient between the two variables (taking ln(CO2)) is 0.87 (0.77 if autocorrelation of the residuals is taken into account). With any solar index the correlation would be much lower [!!!!]. And as I stated before, physically the trend must be deterministic, otherwise it is inconsistent with other observations and/or conservation of energy.”

    But (1) the IPCC, hardly “nobody”, claims exactly that, “a perfect [better than 90%] correlation of global avg temp with CO2”.

    And (2) Bart claims “the correlation coefficient between the two variables (taking ln(CO2)) is 0.87 (0.77 if autocorrelation of the residuals is taken into account)”. This shows that Bart has no clue about the I(0), I(1), and I(2) distinctions. His correlations are all bogus, in terms of just the basic Durbin-Watson statistic. What Bart has to show us is his correlations between ln(CO2) and temperatures anywhere on the planet. Take care: I now have a large database showing that not to be the case anywhere in the USA, Australia, or the UK.

  170. Alan Says:

    Al Tekhasski at https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1442

    I did not say and I did not imply that “climate change is not science”. I most certainly think it is. I don’t think it’s a single discipline … it’s more like a field (or an atmosphere!). Of course, research ‘projects’ are the modus operandi, but I wouldn’t class climate change as just an ‘applied project’.

    Other aspects of your post are a puzzle to me … I am not sure what you’re driving at. My guess is that you are saying that, if we don’t have the computer power to run models that simulate eg Navier-Stokes at the micro level, then it is just plain wrong to run models at the (relatively) macro level (like climate models) and to rely on their output.

    Is that what you are really saying?

    If so, then would you drive across a bridge which was designed before finite element computer analysis became a usable practice for engineers?

  171. Alan Says:
  172. Alan Says:

    There we go … don’t put a comment between “”!

    What I posted was a plea for a “Help” section to figure out how to format my comments better! And please don’t tell me I have to learn html.

    Apologies for causing post alerts to be sent to you all.

  173. Marco Says:

    Good grief, VS, now you are referring to Lubos Motl?

    Lubos is a theoretical physicist who is known for one reason, and one reason only: he’s a contrarian. A really foul-mouthed contrarian at that. You complained about the language previously, and now you point to Lubos as a credible source. Quite the contradiction, VS!

  174. VS Says:

    Marco,

    I’ve been looking over your comments in this thread, and I can conclude but one thing: you are not adding anything to the discussion, exemplified by your use of terms like ‘contrarian’.

    He laid out a physical argument, which YOU OF ALL PEOPLE were asking for.

    Join this discussion on an adult level, or bugger off. Seriously, you are starting to annoy.

  175. VS Says:

    PS. As far as I can gather, Lubos is better known for having been an assistant professor in theoretical physics at Harvard.

  176. Scott Mandia Says:

    Lubos Motl summarizes:

    “the AGW cultists want to deny any history of the climate before the year 1850 or so.”

    Seriously? He destroys all his credibility with this statement alone.

  177. dhogaza Says:

    “PS. As far as I can gather, Lubos is better known for having been an assistant professor in theoretical physics at Harvard.”

    Actually, he’s even better known for being an *ex-*assistant professor at Harvard.

  178. dhogaza Says:

    “Now I ‘understand’ that various paladins, who just got hotlinked to this page through Tamino’s blog, want to start fresh ‘fights’. However, do take the time to read the entire argument first before barging in.”

    The argument is unphysical. Either much of physics is wrong, or one economist is right that his statistical test proves that CO2 doesn’t absorb LW IR (observed), that higher air temps over the ocean don’t lead to increased evaporation, that water vapor isn’t a GHG, etc etc.

    Tamino has published a fair amount; he’s a professional statistician who does time series analysis for a living. So here we have one professional and one self-proclaimed expert disagreeing, but one’s findings are consistent with known physics; the other’s are not.

    Again, easy choices.

    Motl’s a string theorist, not a trained statistician, BTW.

  179. VS Says:

    Oh Scott, please!

    Grant Foster (aka Tamino) doesn’t lose any credibility when he is obviously:

    -cherrypicking information criteria to fit his hypothesis (1 out of 8! And diagnostics show it isn’t valid in that one case!)
    -ignoring all the established findings in the literature (e.g. Kaufmann)
    -misstating the relationship between unit roots and AR processes on his previous blog entry (to AR(1) or not to…)
    -vomiting over an entire field of science (econometrics/economics)

    Josh Halpern (aka Eli Rabbett) doesn’t lose any credibility when he:

    -fails to even get the basic definition of an integrated series right (he is still maintaining it refers to how they ‘increase’, which means he didn’t even bother to consult Wikipedia)
    -proceeds to foulmouth both Beenstock and Reingewertz personally, as well as their discipline, for being clueless

    …but when Motl uses harsh language in an otherwise technical blog entry, a type of post that you too demanded, you simply ignore his concerns and get ‘offended’ by one unrelated sentence.

    I was actually looking forward to your comment, because I didn’t classify you as a ‘soldier’, like e.g. Marco.

  180. VS Says:

    dhogaza,

    So a qualified statistician can’t say anything about it because he’s no physicist? And a physicist can’t say anything about it because he’s no statistician?

    How about you respond to arguments, or not respond at all?

    Thank you for ‘polluting’ the discussion.

  181. VS Says:

    Ahem, re post to Scott

    ‘1 out of 6 information criteria’ :)

  182. dhogaza Says:

    It is rude to post people’s real names when they prefer to post anonymously, even if it’s easy to find out who they are. That puts you on my shit list (along with Motl).

    Motl makes no real point in his post. The fact that the climate system doesn’t exhibit white noise is a given. As far as his summing up goes:

    More generally, I want to emphasize that a working predictive theory of the climate can never “neglect” the “natural variability”.

    He implies that mainstream climate science does, which he knows is not true.

    Strawman.

    Even if there were an important man-made trend, it’s clear that the climate variability is important, too. It’s damn important to know how large it is and how it depends on the timescale (and distances), so that we also know the typical timescale where the “man-made linear trend” could start to beat the natural variability.

    True enough. The rule of thumb from the WMO has been, for many decades, a 30-year time frame for climate changes on a non-geological timescale.

    We surely know that the timescale from the previous sentence is longer than 15 years because it’s pretty easy for the natural factors to beat the man-made effects for 15 years because there’s been no statistically significant warming since 1995.

    Which is totally consistent with the mainstream view of looking over 30 years, rather than 15.

    But it may be much longer.

    Motl knows this is a false statement, because he recently said that the reason to ask Jones whether or not there’s significant warming from 1995 to present was that there *is* significant warming from 1994 to present.

    So even if the man-made trend existed and were large, it’s completely self-evident that most of the research concerning “climate change” would have to focus on the variations which are obviously of natural origin

    Motl knows that much of climate science *is* focused on understanding natural variations. It’s another strawman.

    BTW, VS, strawman arguments are a form of dishonesty, something I’m sure you don’t support.

    You can learn exactly nothing if you deny all of this and you only focus on some hypothetical, politically motivated term that only existed for 100-150 years. The Earth’s long history doesn’t deserve to be denied in this way.

    Again, strawman. He’s lying out his ass here by implying that climate science denies or doesn’t study the earth’s long history.

    The last two paragraphs, which I won’t post here, are equally disconnected from reality.

    I’m sure, though, that VS is convinced that climate science ignores natural variability, the noise structure found in climate, forcings other than CO2, paleoclimate, etc.

    Because Motl has said so.

    As noted above, VS seems unaware of Milankovitch cycles. My guess is he knows next to nothing about the physical aspects of climate, and therefore doesn’t have the background to see that Motl has built an army of strawmen to demolish.

    Motl, of course, depends on such ignorance. He hopes that by implying that climate science ignores climate changes that have happened over geological timescales, people who are unaware of how comprehensive climate science is will fall for his lie.

    As VS has apparently done …

  183. dhogaza Says:

    So a qualified statistician can’t say anything about it because he’s no physicist?

    You have nothing to say because you know nothing of the physical science.

  184. Adrian Burd Says:

    VS,

    I’m sorry, but you still continue to do yourself no favors here (or elsewhere). Yes, maybe I should have the time to read in detail the papers you refer to, but I don’t (I have my own research to deal with, as well as administration, teaching, etc.).

    So I ask you once again, please explain your points simply.

    As for the physics articles you have referred to, well, the first two were by people whose arguments were well demolished. As for Lubos, to my knowledge he has little to no standing amongst the physics community, having gone off the deep end long ago. If you searched beyond the first two or three links that Google turned up, you would have discovered that for yourself. You will also discover that he “resigned” his position as an assistant professor at Harvard some time ago, and did so under something of a cloud.

    So, since you are unable or unwilling to present your arguments in a way that is readily understandable to someone like me (call me stupid if you like), I will side with the physics community on this one.

    Adrian

  185. Enough Already Says:

    I would urge the moderator of this blog to cut off the troll and return this normally good blog to the reality-based universe.

  186. Scott A. Mandia Says:

    If this is a war, then I would call myself a soldier and I am pretty confident that my side not only has superior numbers of troops but much better weapons. :)

    Of course, one must still know thine enemy.

    Anyway, my comment about Motl is that it is absurd to think that climate scientists (and all others in related fields) do not consider data before 1850. If that were true, then why all the hockey stick fuss?

    Is it true that the main disagreement between the random walk hypothesis and the physical arguments comes down to the length of time considered?

    If so, then we have a problem, because it would be like comparing deaths from being hit by horses in the past 2,000 years vs. deaths from being hit by cars in the past 2,000 years. No?

  187. Paul Tonita Says:

    So, to sum it all up, Thermodynamics Schmermodynamics. It’s all random! Maybe Toyota can use this random walk bit to explain their faulty accelerators. They’ll be very pleased to hear this!

  188. Marco Says:

    @VS:
    A physicist is not just “a physicist”. That is, a theoretical physicist does not automatically understand thermodynamics. In fact, we have several examples of some who simply don’t (Gerlich and Tscheuschner, to start with). Motl has made a career out of being a contrarian, not just on climate change. Just check his site and look at his comments about string theory. And yes, he once was a promising scientist. Look at what kind of person this once-promising scientist is here:
    http://backreaction.blogspot.com/2007/08/lubo-motl.html
    Be sure to read the word document with some examples of Motl’s way of arguing. And then you complain about me.

  189. jfr117 Says:

    from motl’s blog: “When we say that a function, “f(x)”, resembles “white noise”, it means that its values at different values of “x” are random and independent from each other. Such functions are inevitably completely discontinuous. If we use them as a model of temperatures, the temperature in the next year has nothing to do with the temperature of the previous year. It can suddenly jump to the temperatures seen in 1650.”

    i had never thought about the assumptions inherent in the variability. but this assumption (trend plus white noise) does not make physical sense for temps. what are the similar assumptions for red and pink noise?

  190. dhogaza Says:

    i had never thought about the assumptions inherent in the variability. but this assumption (trend plus white noise) does not make physical sense for temps.

    No, there’s nothing controversial about that part of Motl’s post, either.

    what are the similar assumptions for red and pink noise?

    Tamino's posted a bunch of stuff over time regarding noise and temperature.

    Here's one that touches on it, but there are more detailed ones over there.

    http://tamino.wordpress.com/2007/09/21/cheaper-by-the-decade/

  191. jfr117 Says:

    @ Scott

    i am guessing, but i think motl was referring to the smoothed nature of recent historical temp anomalies that have made events such as the MWP and LIA go away…and made the recent warming look very large in comparison.

  192. Bart Says:

    VS,

    I have yet to see your reply to my new post, where I outline that it is evident on physical grounds that the increase in global avg temp is not merely random: a random increase would cause a negative energy imbalance, or the extra energy would have to come from another segment of the climate system (e.g. the ocean, cryosphere, etc.). Neither is the case: there is actually a positive energy imbalance, and the other reservoirs are also accumulating energy.

    How do you reconcile this with the hypothesis of a random walk?

    Moreover, there is a known positive radiative forcing. You’d have to explain how it’s possible that an enhanced concentration of GHG in the atmosphere would *not* lead to warming; it contradicts what we think we know about the physics.

  193. Al Tekhasski Says:

    All energy comes from the Sun; it just passes through the climate system. Therefore, an energy imbalance could be anything, and air parameters can and will walk up or down with changes in “effective atmospheric thermal resistance” and interplays between local (read: oceans, soils, ice) heat “capacitors”/reservoirs. For example, the temperature in a pot on a slow stove fluctuates like turbulent hell while remaining reasonably bounded, all by the same physics.

    The importance of a “positive” energy imbalance due to CO2 “radiative forcing” is highly questionable because, for example, the Earthshine experiment detected a long-term drift in albedo, from 0.319 in 1995 to 0.297 in 1999, a 2.2 percentage-point change. This would be equivalent to a quadrupling of CO2. And their reconstruction of albedo from the ISCCP cloud database shows a staggering 10% anomaly (or ~3% global) from 1986 to 1998.

    The “known positive” radiative forcing is a result of theoretical estimates based on tropical (and near-tropical) abstract models of the atmosphere with dubious cloud parametrization, a result contrarians cannot reproduce without the full original information. Even then, all this alleged forcing is just 1/20th of the energy imbalance from the recorded drastic albedo changes, which have no explanation from climatology and are ignored for simplicity. So nothing contradicts anything in this messy application with no apparent small parameters, such that it is not possible to cleanly apply the classic methods of physics.

  194. Adrian Burd Says:

    Al,

    Please go and read

    http://www.skepticalscience.com/earth-albedo-effect.htm

    To summarize what is there:

    The changes in albedo inferred from earthshine do not entirely agree with those from satellite measurements. The latter are whole-planet measurements, whereas the earthshine measurements cover only the 0.4-0.7 micron wavelength band. Satellites show little to no trend in albedo from 2000 onwards, whereas earthshine shows an increase between 1999-2003 and little to no trend since then.

    Lastly, do you think for a minute that climate scientists are sufficiently stupid and ignorant (or duplicitous) not to include albedo? Changes in land use, atmospheric aerosols, clouds, etc. are taken into account in calculating the radiative forcings. This is all abundantly clear in Chapter 2 of the IPCC AR4 WG1 report.

    I sometimes think that otherwise intelligent people go searching for the slightest thing that might bolster their claims, come across some site such as WUWT, and repeat what they’ve seen. Instead, they should spend time reading the literature.

    I have changed fields (from theoretical physics to marine science) and I know that it takes a long time, lots of effort and hard work to become knowledgeable and be able to contribute significantly to a second field. People seem to forget this, even though they have presumably put in the time and work to become expert in their own discipline.

    Now, I for one would love to learn more about cointegration, unit root tests, etc., so that I can assess the arguments being presented here, as well as perhaps make use of them in my own work. So again, VS, perhaps you can enlighten us all as to how precisely these things work and why you think there is a problem.

    Adrian

  195. Pat Cassen Says:

    In addition to Adrian Burd’s recommendation, Al should read the comprehensive review by Wild: “Global dimming and brightening: A review”
    http://www.leif.org/EOS/2008JD011470.pdf
    “Recent brightening cannot supersede the greenhouse effect as the main cause of global warming, since land surface temperatures overall increased by 0.8°C from 1960 to 2000, even though solar brightening did not fully outweigh prior dimming within this period…”
    The story is nowhere near as simple as Al would have it.

  196. Al Tekhasski Says:

    Adrian Burd wrote: “The changes in albedo inferred from earthshine do not entirely agree with those from satellite measurements. The latter are whole-planet measurements, whereas the earthshine measurements cover only the 0.4-0.7 micron wavelength band. Satellites show little to no trend in albedo from 2000 onwards, whereas earthshine shows an increase between 1999-2003 and little to no trend since then.”

    I think you are confused. It is earthshine that covers (a) the entire Earth at once, and (b) the right light range. Satellites, in contrast, require a lot of effort in interpreting the received brightness. They use either swaths of limited view fields, or need orbit corrections, calibration target corrections, satellite body temperature corrections, diurnal corrections, inverse scattering corrections (weighting functions), and who knows what else. The results have to be reconstructed from many pieces of incoherent and noisy data. It is no better than the surface garbage, and is subject to wild wishful interpretations.

    Why do I think that AGW climatologists are negligent about albedo? Because (a) they (at least the RC advocates) frequently state that albedo is constant and “well known”, and (b) the albedo effect is nearly two orders of magnitude bigger than the entire alleged CO2 doubling, yet no historical data are available from the distant past, and cloud cover is frequently a fudge parameter that allows models to fit known surface data. Yet they believe they can calibrate their models without the most important data about albedo.

    Adrian also wrote: “I have changed fields (from theoretical physics to marine science) and I know that it takes a long time, lots of effort and hard work to become knowledgeable and be able to contribute significantly to a second field.”

    Sorry to ask, but why would someone “change” from theoretical physics to marine stuff? Do they pay more, or something else?

  197. Al Tekhasski Says:

    Pat Cassen wrote: “Al should read the comprehensive review”

    I appreciate this pointer to real (typical) climatology. It is an impressive amount of effort; one can only guess how much it cost taxpayers.

    First, let me remark that global SSR is not the same as global albedo. Second, I certainly agree that the story is not that simple. Unfortunately, I can easily point out one obvious source of discrepancies and incoherence. The review mentions that “To date more than 30 anchor sites in different climate regimes provide data at high temporal resolution (minute data).”

    Now consider this. Weather patterns have a characteristic spatial variability of the order of 50 km. Therefore, in accordance with the Nyquist sampling theorem, one needs a spatial sampling grid of 25 x 25 km to capture representative statistical properties of the climate field. The Earth’s surface is 5×10^8 km², so a global climate data acquisition grid would need about 800,000 sensors equally spaced around the globe. The SSR network has 30. Give me a break.

  198. Tim Curtin Says:

    dhogaza: Tamino outed himself as Grant Foster at RC when, as “guest poster (sic)” on 16 September 2007, he proceeded to plagiarise (if he was not one of the authors) the paper by GF, Annan, Schmidt and Mann which had been submitted to JGR on the 10th; the paper attacked Stephen Schwartz’s paper in JGR before it had even appeared. Tamino’s graphs required direct access to the data in GF et al., and it would certainly be very odd for Gavin Schmidt to commission the guest posting from anyone other than his co-author, who at one point uses the term “we”, confirming that “Tamino” was the lead author. There is no harm in any of this, but you are wrong to accuse VS of outing GF (or Halpern, long known to most likely be the Rabett). What is reprehensible is the way “Tamino” hides who he is from most of his readers while maligning others who do use their real names (like Anthony Watts, to name just one). One suspects the real reason for GF to continue modestly using his Tamino sobriquet is that he has very little to be modest about.

  199. Tim Curtin Says:

    It is surprising to find Bart (March 12 at 14.11) citing against me the claim by a certain Barton Paul Levenson that the correlation coefficient between global mean temperature and CO2 is an amazing 0.87. Alas, BPL’s frequent contributions to the Deltoid blog all too often betray a lack of statistical training, and his “results” as cited by Bart have never been published, let alone peer reviewed. The same appears to be true of Bart’s own training, as BPL’s use of ln(CO2) instead of CO2 itself, whilst approved by Bart, is nonsense, and does nothing to improve the true outcome, even if the use of ln is apparently what the IPCC does when deriving its “radiative forcing”.
    Anyway, the adjusted R2 is actually 0.64 for your source’s 1880-1998 series, and 0.76 on his data from 1880 to 2007, but that is without setting the constant to 0, as one should. Doing that, the adjusted R2 vanishes to MINUS 0.05 (for 1880-2007) and the coefficient ceases to be statistically significant, both for actual CO2 and for ln(CO2).
    In short, to cite BPL’s absurd “result” when it is in flagrant disregard of all the high-powered tests for spurious correlation cited elsewhere on this blog is astonishing; his “result” does not even pass the Durbin-Watson test.
    All the same, Bart, your blog has otherwise been rewarding for many of us.

  200. Bart Verheggen Says:

    Tim Curtin,

    The temperature effect of CO2 is approximately logarithmic (hence the sensitivity is defined per doubling of CO2 rather than per ppm), and the same relation holds over a certain interval of concentrations, but not all the way down to zero (where the relation becomes close to linear, I believe). Thus doing a correlation while forcing the intercept to zero would be wrong, and the way BPL did it is correct as a first approximation.
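
    As a rough numerical illustration of the logarithmic relation, here is a minimal Python sketch using the standard simplified forcing expression of Myhre et al. (1998); this shows why the log form is the natural first approximation, and is not necessarily the exact form BPL used:

        import numpy as np

        # Simplified CO2 radiative forcing (Myhre et al. 1998): F = 5.35 * ln(C/C0) in W/m^2.
        def co2_forcing(c_ppm, c0_ppm=280.0):
            return 5.35 * np.log(c_ppm / c0_ppm)

        print(co2_forcing(560.0))   # one doubling: ~3.7 W/m^2
        print(co2_forcing(1120.0))  # two doublings: ~7.4 W/m^2 (equal step per doubling)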

    Also, we’re talking about *global* warming, so correlations for individual locations don’t interest me much.

    Al Tekhasski,

    The earth’s climate remains constant if in- and outgoing radiation equal each other, and it changes when there’s an imbalance, which is currently the case, in line with what would be expected from an enhanced greenhouse effect (i.e. more infrared being radiated back to the surface and less escaping to space).

    Please refrain from setting up strawman arguments and making broad-brush accusations against scientists; I’m not interested.

  201. dhogaza Says:

    Tamino outed himself as Grant Foster at RC when as “guest poster (sic)” on 16 September 2007 he proceeded to plagiarise (if he was not one of the authors) the paper by GF, Annan, Schmidt and Mann

    Schmidt invited him as a guest to discuss one of Schmidt’s papers, and you’re accusing him of *plagiarism*?

    Fortunately, everyone with a three-digit IQ knows that Tim Curtin is

    1. a liar

    2. ignorant

    3. a fool

    so no harm is done. I’d be wary of libeling people in the UK if I were you, though.

  202. VS Says:

    And if you had that ‘3 digit IQ’ yourself, you might actually be able to read:

    ” to plagiarise (if he was not one of the authors) ”

    Now bugger off, troll.

  203. dhogaza Says:

    Here’s the post Tim Curtin’s referring to.

    It is posted by Tamino, and nowhere does it reveal Tamino’s real identity.

    Therefore, this:

    Tamino outed himself as Grant Foster at RC when as “guest poster (sic)” on 16 September 2007

    would appear to be a false statement.

  204. VS Says:

    Which part of ‘bugger off troll’ do you fail to understand? The ‘bugger off’ or ‘troll’?

  205. dhogaza Says:

    VS lives! He doesn’t answer any of the questions put to him, or rebut any of the posts showing that he’s ignorant of climate science, but he lives!

    Definition of plagiarism:

    v. tr.
    To use and pass off (the ideas or writings of another) as one’s own.

    To appropriate for use as one’s own passages or ideas from (another).

    Note that “plagiarism is whatever Tim Curtin decides it is” is not one of the dictionary definitions.

    Got any proof that when Schmidt invited Tamino to guest post that Tamino plagiarized Schmidt?

    I didn’t think so.

    The fact that you’re impressed with serial liars like Motl and Curtin says a lot, VS.

  206. dhogaza Says:

    Which part of ‘bugger off troll’ do you fail to understand? The ‘bugger off’ or ‘troll’?

    Which part of “this is not your blog” do you fail to understand?

    Bart, looks like VS is backed into a corner and is fighting like a rabid bat …

  207. dhogaza Says:

    And by the way, VS, you are more than welcome to chase the link I provided and to prove that Tim’s telling the truth when he says that Tamino “outed himself” in that post, or the thread that follows.

  208. jfr117 Says:

    “He doesn’t answer any of he questions put to him, or rebut any of the posts showing that he’s ignorant of climate science, but he lives!”

    dhogaza…are you serious? vs has engaged every comment that i have read in some way, shape or form. i don’t know how he has the time to do it. it took me four hours to read this entire thread yesterday.

    to his credit he has tried to remain on point, as much as you can in what is the wild west. to pretend he has not brought forth pertinent points is either dishonest or means you haven’t read the whole thread.

    vs has raised an interesting statistical question. since tamino has spent years considering the temperature series via his statistics – to suddenly say that statistics don’t matter when they raise a potentially different conclusion raises questions for ‘skeptics’, since that moves the goalposts when the answer changes.

    i have no idea who tamino is, but wouldn’t the real grant foster have an issue with being labeled as tamino if he was, in fact, not tamino? i assume the real grant foster is in the ‘climate science’ field (whatever that means) and is aware of tamino’s blog.

  209. VS Says:

    Bart,

    Indeed, please bring this under control. I thought we were having a normal conversation, and then suddenly a whole horde of these types comes along (it had to do with the Tamino link, I believe; before that everything was going more or less fine).

    They add absolutely nothing to the discussion and just keep insulting everyone, on the assumption that they have you to back them up (see also above). I can’t imagine you agree with that. We may not agree with each other, but you and I could still share a beer on a terrace. I can’t say the same about this guy.

    Just take a good look at his ‘contributions’, for example.

    Incidentally, you could also take a look at my latest reply to Tamino’s post, here on your blog. The man has made a number of fatal errors in his analysis. A second-year statistics student could fish them out (and they have been fished out: entropy-measure cherry-picking deluxe.. misexplaining what an I(1) vs AR(1) process is..).

    For the sake of a healthy debate, I would very much appreciate it if you could respond to that on substance. Everyone keeps ignoring it.

    I will certainly still respond to your posts, but all my time is being eaten up by people like this dhogaza here. And nobody seems to mind.

    After all, he thinks I should engage with his comments while he opens by declaring that I’m on his ‘shit list’. What am I really supposed to do with that?

  210. Bart Verheggen Says:

    Behave yourselves, please. No namecalling. May I also remind you that only the host (i.e. me in this case) can tell someone to leave.

  211. dhogaza Says:

    dhogaza…are you serious? vs has engaged every comment that i have read in some way, shape or form.

    Where has he commented on my post regarding Motl’s post that VS references?

    I’ve missed it.

    Thanks in advance for the link …

    vs has raised an interesting statistical question. since tamino has spent years considering the temperature series via his statistics – to suddenly say that statistics don’t matter

    If one has a large body of observed physical evidence, and someone comes along and says “I’ve statistically proven that this observed physical evidence can’t be true”, it’s reasonable to say “your statistical analysis is most likely wrong” (not “don’t matter” – WRONG).

    Because the alternative is that our observations aren’t real, which is silly.

  212. dhogaza Says:

    i have no idea who tamino is, but wouldn’t the real grant foster have an issue being labeled as tamino if he was in fact, not tamino? i assume the real grant foster is in the ‘climate science’ field (whatever that means) and is aware of tamino’s blog.

    He’s a professional statistician. The identification is correct; however, the point is that “outing” the real name of someone who chooses, for whatever reason, to post anonymously is rude. Or worse.

    It’s also a favorite trick of certain people in the denialsphere.

    For instance, in my case, over at dotearth, someone posted not only my real name BUT PART OF MY CLIENT LIST.

    You can get that off the net easily enough – I have nothing to hide – but people do this as a form of intimidation. You know … “if you post here at this blog, I’ll reveal your name, some of your clients, etc, potentially exposing your clients to the type of abuse and harassment that characterizes denialist tactics”.

    It’s just wrong.

  213. dhogaza Says:

    My shorter version of VS’s post above:

    Please make these people go away so I don’t have to answer as to why I think my statistical treatment trumps physics, or why my statistical argument is more valid than Tamino’s.

    As far as insults go, VS, all along you’ve dismissed Tamino as not understanding first-year stats, when of course we know that Tamino makes his living doing this stuff. Time series analysis is all that he does.

  214. jfr117 Says:

    i had somehow missed a lot of the interaction from yesterday. a bunch of posts that i see now, i didn’t see yesterday. guess i have to read again. my bad.

    but if the statistics are right (tbd) then the theory needs to be reworked. that’s how science works. consensus and all.

  215. dhogaza Says:

    At this point, perhaps the best thing would be for VS to work his analysis up into publishable form, and find a suitable venue. My guess is he won’t get anywhere in any journal related to the physical sciences (since his results are unphysical), but I imagine he’ll have no problem getting it published in an economics journal.

  216. dhogaza Says:

    i had somehow missed a lot of the interaction from yesterday. a bunch of posts that i see now, i didn’t see yesterday. guess i have to read again. my bad.

    No problem, none at all …

    but if the statistics are right (tbd) then the theory needs to be reworked.

    I don’t think you fully understand. If the statistics are right and there’s no actual trend in the observed temperature data, then we need to explain why the MSU and later AMSU data are all screwed up. Why the surface temp record is all screwed up. Why ecosystems are moving north (albeit their parts aren’t moving north in synchronized fashion, which is a real problem). Why measurements of IR radiation at the top of the atmosphere, taken by satellites, match theory. Why the water vapor response, measured by satellite, matches model results. Why downwelling IR measurements match theory.

    We’ll have to explain why all the extra energy being retained in the climate/earth/ocean system is … magically disappearing. There’s nothing in physics that allows it.

    You have to really believe that VS has made a very large part of physics *and* physical observations of related phenomena disappear in one big POOF!

    I say that’s unlikely …

  217. Bart Verheggen Says:

    VS,

    You choose whether you want to engage someone or not, also if (s)he “requests an answer” (in reply to “After all, he thinks I should engage with his comments while he opens by declaring that I’m on his ‘shit list’. What am I really supposed to do with that?”)

    All,

    I don’t like the insults being traded back and forth, but there are more things I don’t like. Al T’s conspiracist comment rubbed me the wrong way; veiled accusations ditto (perhaps even more than clear ones out in the open). Unveiling the identity of people who wish to remain anonymous is rude.

    Everybody just try to be a little nicer than you really want to be (after mt). Count to ten and all that. Engage with substance, not with namecalling (or don’t engage). If this ends in a food fight I’ll close the comments.

  218. VS Says:

    The results we’re debating now have already been published.

    Read the thread.

    ** Woodward and Grey (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Kaufmann et al (2006)
    – use I(1) for GLOBL (Temp var.)

    You’re a troll.

  219. VS Says:

    Didn’t see your comment Bart.

  220. jfr117 Says:

    @dhogaza – i believe vs’ original thesis was to rebuke bart’s assertion of statistically significant warming. not that it isn’t warm, or hasn’t warmed; just that the warming might be insignificant if treated differently on a statistical basis. if that is true, and motl’s description does make more sense than treating the temperature series as a linear trend plus white noise, then to me the implication is this: yes we have warmed, but it is not significant within the modern record. that, combined with the NAS acknowledgment that mann’s reconstruction is valid back 400 years, should help us conclude that: yes it’s warm, but it may still be well within the bounds of natural variability.

    therefore, there is no attack on physics. just a reframing of what we are looking at, putting it into another perspective. the data is right, the co2 theory may even be right, but natural variability is large and something we need to understand better.

    to make vs have to explain ‘your’ theory because of his conclusion just doesn’t make sense to me.

    @ vs
    i would recommend you stick to what you started with – statistics. i enjoyed your earlier posts but recently that message has been diluted. who cares who anybody is. if they wanted us to know, then they would tell us. please respond to actual scientific discourse though and ignore the insults. i recognize your contribution and would ask you to please continue. although i understand it must be difficult fighting off everybody.

  221. Bart Says:

    jfr117,

    Thanks for a thoughtful comment and bringing the discussion back to contents.

    VS postulated that by purely inspecting the numerical values (i.e. without physical meaning attached to them), their increase is indistinguishable from a random walk. This seems to depend on choices made in the statistics, but even if we accept it as true, the question remains: was the increase in fact unforced? Showing that numerically it could have been doesn’t mean that it was.

    In my newer post I argue on physical grounds that it wasn’t random, but was in fact forced. Namely, the hypothesis of unforced variability is inconsistent with the observations of a positive radiative balance at the top of the atmosphere, and with the observation of increased heat content/signs of global warming in other metrics (ocean heat content, Arctic sea ice, ice sheets, glaciers, ecosystems, etc).

    The point I was making at the end of this post was that statistically there is no reason to assume that the long term trend of increasing temperatures has stopped or reversed since 1998. It would be slightly ironic if people who have trumpeted the “global warming stopped in 1998” canard (Lubos perhaps? Haven’t checked recently) would now claim that it’s all a random walk anyway. I would expect VS to agree that if the 130-year record is merely a random walk, then the latest 12 years are by far not enough to draw any conclusions from. Perhaps VS will join us in fighting strongly against the erroneous “1998” claim.

  222. Alex Says:

    Over at Tamino’s blog I posted my results of the augmented Dickey-Fuller test, conducted with several selection criteria, to illustrate that it is not so obvious whether the null hypothesis of a unit root is rejected or not. Tamino pointed out that the Phillips-Perron test does reject the null hypothesis, and when I checked this I got the same result. The way I see it, this just supports what I was trying to point out: simply on the basis of statistical testing we can neither accept nor reject the hypothesis of a unit root. This is not very satisfying, but we should of course not reject (or accept) the presence of a unit root out of this dissatisfaction.

    I have seen several people, both at this blog and at Tamino’s, talking about random walks and unit roots as if they are the same thing. This is not the case. A random walk (as explained at Tamino’s page) is a simple model which has a unit root. However, there are many (more complicated) models which have a unit root but are not a random walk. So a random walk is just one model with this property, but not every model with this property is a random walk. The reason it is so important to check whether the series contains a unit root is that if it does, many of the ‘standard’ statistical techniques are invalid, which might lead to false conclusions. I hope this clarifies why some of us put so much emphasis on the possible presence of a unit root, and why this is not the same as saying that temperature is a random walk. A simulated illustration follows below.
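
    To make the distinction concrete, here is a minimal simulated sketch (my own construction, in Python with numpy and statsmodels; nothing in this thread depends on these particular tools). The differences of the series follow an AR(1), so the level series has a unit root without being a simple random walk:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        n = 129  # roughly the length of the 1880-2008 annual record

        # Differences follow an AR(1): d[t] = 0.5*d[t-1] + e[t]. The cumulated
        # series y then has a unit root, but its steps are autocorrelated, so
        # it is NOT a simple random walk.
        d = np.zeros(n)
        e = rng.normal(size=n)
        for t in range(1, n):
            d[t] = 0.5 * d[t - 1] + e[t]
        y = np.cumsum(d)

        # ADF with intercept and trend ("ct"), lag length chosen by AIC:
        stat, p, lags, nobs, crit, icbest = adfuller(y, regression="ct", autolag="AIC")
        print(p)  # typically well above 0.05: the unit root is not rejected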

  223. dhogaza Says:

    if that is true, and motl’s description does make more sense than treating temperature series as a linear trend plus white noise

    Again, it’s a strawman, no one claims it’s white noise …

    Read this post entitled “How long?” by Tamino, for instance.

    Search Real Climate for “autocorrelation” and you’ll find a ton of references.

    This should make clear the strawman nature of Motl’s implication that climate science treats climate data as trend+white noise.

    In Motl’s case, he knows this. I will let you draw your own conclusion as to why he writes this way despite knowing this.

    In VS’s case, I suspect he believes the implication made by Motl to be true, since it was he who linked to Motl in the first place. I don’t think VS understood that Motl’s piece builds a small army of strawmen to shoot down.

    At least I hope he didn’t …

  224. jfr117 Says:

    @bart

    this posting has had many good contributions and, except for the last day or two, a real advancement of information. i think (emphasized) that the term ‘random walk’ is confusing the issue. i don’t think that is how we should view temperature data, and i don’t think that is what has been proposed – but the statistics that may be applicable to this kind of data are most commonly associated with random data. thus we have been equating temperature with a random walk. but in fact we are only using statistics designed to handle this kind of data. in other words: the use of random-walk statistical assumptions for temperature does not necessarily mean that temperature is physically described as a random walk. is this correct?

    if this is true, then again, no attack on physics. just another way to statistically look at the data. temperature is what it is – statistics help us to view it through different lenses.

    alex and vs, you seem to be the statistical gurus here. is this correct?

    obviously the temperature has increased due to a forcing – that is true. the question vs

  225. Bart Says:

    Thanks Alex, that was helpful.

    You wrote:

    “if it does (contain a unit root), many of the ’standard’ statistical techniques are invalid, which might lead to false conclusions.”

    So *if* the temp series contains a unit root, how would that influence the OLS trend I calculated? My guess (corroborated by Tamino) is that the actual trend estimate wouldn’t be much different, but the error of that estimate would be larger. Calling such a trend nonsense and misleading (as per VS’ first post) seems too strong a pronouncement.
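
    For intuition on why the naive error estimate misleads under a unit root, here is a minimal Monte Carlo sketch (my own construction, not anything VS or Tamino posted): fit an OLS trend to pure random walks, which have no true trend, and count how often the nominal 5% t-test declares the trend “significant”:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n, reps, hits = 129, 2000, 0
        X = sm.add_constant(np.arange(n, dtype=float))  # intercept + linear trend

        for _ in range(reps):
            y = np.cumsum(rng.normal(size=n))  # pure random walk, no true trend
            if abs(sm.OLS(y, X).fit().tvalues[1]) > 1.96:  # naive 5% test
                hits += 1

        print(hits / reps)  # far above 0.05: spuriously "significant" trends

    The slope estimate itself is not biased in any particular direction; it is the naive standard error, which ignores the unit root, that is far too small.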

  226. dhogaza Says:

    VS, a question … over a Tamino’s you said …

    Look at the temperature series over the past couple of [hundred] thousand years. Where do you see a trend? There is a cyclical movement, but a deterministic trend? Nope…

    I’ve added in the word “hundred” because later you said that’s what you meant.

    Why do you think this is significant to analyzing what’s happening on century timescales?

  227. Adrian Burd Says:

    Alex,

    Many thanks for a very useful and clear explanation. Am I correct in assuming that the Dickey-Fuller test and the Phillips-Perron test have different assumptions behind them? Are they indeed testing the same thing with the same background assumptions? If so, can one argue that the fact that one gets different results from the two tests says something more about the tests than about the data?

    I wonder if you could elaborate on your statement to the effect that the presence of a unit root invalidates many of the standard statistical tests. Which types of statistical test are invalidated and how are they invalidated?

    Many thanks,

    Adrian

  228. Tim Curtin Says:

    1. Dhogaza: I see that you have called me a liar and worse, on March 14, 2010 at 15:48 “Fortunately, everyone with a three-digit IQ knows that Tim Curtin is
    1. a liar
    2. ignorant
    3. a fool
    so no harm is done. I’d be wary of libeling people in the UK if I were you, though”.
    You beauty, dhogaza, taking care to avoid libel laws here in Australia with your anonymity. For the record, I have never libelled anyone in the UK, USA or here, as that is not my style.
    But Bart, despite your penchant for censuring VS et al. for using such language, while as ever dhogaza here and everywhere always gets away with it, you as publisher remain liable, as Dow Jones found not long ago in a case brought against them in Melbourne over an article in Barron’s published in the USA but circulated here. IF I were to be that litigious, watch out. But to be on the safe side, I suggest you either get dhogaza to retract and apologise or else ban him, just as I am banned from Deltoid for using straight talk against him and his ilk.
    [edit. No more talk of anonymity issues or other “whodunnit” stuff. BV]
    However, jfr117 being anonymous does not attract dhogaza’s vitriol!
    Dhogaza’s other accusations against me are also defamatory, and Bart and his service provider need to be more careful, and should at least warn d. to mind his language – or out himself, so that his targets can reply in kind.
    [Reply: Keep your threats to yourself. BV]

  229. Tim Curtin Says:

    Bart (at March 14, 2010 at 15:02): Again you astonish me! You said: “The temperature effect of CO2 is approximately logarithmic (hence the sensitivity is defined per doubling of CO2 rather than per ppm (sic)).” First, your parenthesis is I fear wrong (enough so for dhogaza to be very rude about you if he understood it himself, but he does not!). The doubling of atmospheric CO2 is ALWAYS stated in terms of parts per million by volume (ppmv), namely from 280 ppmv in 1750 or 1900 (the end-year of “pre-industrial” seems to be variable), so doubling means 560 ppmv for CO2 alone, and more if other GHGs are included, but still in ppmv (the Stern and Garnaut reports say we had reached 450 ppmv of CO2-equivalent by 2005, so doubling for them from 2005 means 900 ppmv CO2-e by 2060 (Garnaut Fig. 4.4)).
    Yes, Arrhenius stated that the effect of atmospheric CO2 would be about logarithmic, and showed less extra warming for a 100% increase in [CO2] from the 1900 level than for a 50% increase. The scientists favoured by the IPCC do not accept this, as in AR4 WG1, which infamously claims that doubling will be achieved by 2047 (A2 scenario) with warming of 3°C (+/-1.5) by then (WG1, Fig. 10.20(a)), when the actual ~40% rise in [CO2] since 1900 has been associated with a rise of only 0.7°C in GISS global mean temperature from 1900 to 2000. The IPCC’s teams defend this inconsistency by claiming that aerosols counteracted the effects of the 40% rise in [CO2] between 1900 and 2000, and that these aerosols are no longer present, hence the 3°C for the extra 60% of [CO2] from the 1900 baseline. Clearly almost no one has noticed the brown haze that spreads all the way from Shanghai and Beijing to Kabul and further west. Those that have argue for increasing aerosols and their benevolent haze (see Nature 1077-1078, 30 April 2009).
    BTW, Bart, NONE of the IPCC’s projections are based on the kind of econometric techniques discussed by VS, Al T, and others here. Have you ever seen any econometric work at Foster/Tamino’s? His time series are purely arithmetic.

    As for setting the constant at zero: Levenson’s correlations where that is not the case fail the Durbin-Watson test (if they did not, he would be on the front cover of IPCC AR5), and when the constant is set at zero, there is no correlation, not even a spurious one. It is for you to explain why the constant should not be zero, and why Levenson did not test for unit roots. My own guru on these matters states: “The unwanted consequence of allowing Excel to compute a non-zero intercept is to introduce an additional ‘linear trend estimator’ along with the other specific regressions of Temperature on CO2”. This “stranger at the feast” helps to explain the gross exaggeration of Levenson’s finding.
    The problem is of course easily fixed, as I have done, by re-running the regressions on the first differences of the data, exercising the Excel option to force a = 0. End of your and Levenson’s nice story.
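
    For anyone wanting to check these competing claims outside Excel, a minimal hedged sketch in Python with statsmodels (the placeholder data below are mine and purely hypothetical; substitute the actual GISS and CO2 series before drawing any conclusions):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        # Hypothetical placeholder data so the sketch runs; replace with real series.
        years = np.arange(1880, 2008)
        co2_ppm = 280.0 * np.exp(0.0015 * (years - 1880))
        temp = 0.005 * (years - 1880) + np.random.default_rng(1).normal(0.0, 0.1, years.size)

        # Levels regression of temperature on ln(CO2), with intercept (BPL-style):
        fit1 = sm.OLS(temp, sm.add_constant(np.log(co2_ppm))).fit()
        print(fit1.rsquared_adj, durbin_watson(fit1.resid))

        # First-difference regression through the origin (the variant described above):
        fit2 = sm.OLS(np.diff(temp), np.diff(np.log(co2_ppm))).fit()
        print(fit2.rsquared_adj, durbin_watson(fit2.resid))

    A Durbin-Watson statistic far below 2 in the levels regression signals the residual autocorrelation at issue; whether forcing the intercept to zero is legitimate is exactly what Bart disputes above.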

  230. VS Says:

    CORRECTION REGARDING THE KPSS SECTION:

    There is a glaring mistake in the post (I typed it quickly and in one go, while ‘flipping around’ null hypotheses… I guess it helps to proofread).

    It regards the KPSS test statistics.

    The stationarity (i.e. the hypothesis of NO unit root) of the GISS-all series is in fact rejected in most cases at the 5% and 10% significance levels, but not at the 1% significance level.

    The KPSS test results should read like this:

    ========================

    Critical values:

    1% level, 0.216000
    5% level, 0.146000
    10% level, 0.119000

    So once the Lagrange Multiplier (LM) test statistic is ABOVE one of these values, STATIONARITY is rejected at that significance level.

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.165696
    Conclusion, stationarity is not rejected at 1% significance level. Rejected at 5% and 10% significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.154875
    Conclusion, stationarity is not rejected at 1% significance level. Rejected at 5% and 10% significance levels.

    PARZEN KERNEL:

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.147904
    Conclusion, stationarity is not rejected at 1% significance level. Rejected at 5% and 10% significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.130705
    Conclusion, stationarity is not rejected at the 1% and 5% significance levels. Rejected at the 10% significance level.

    ========================

    The discussion in fact remains more or less the same, and I still must note the small-sample properties of the KPSS test statistic, which is asymptotic.

    My sincere apologies for any confusion.

    The test outcomes are furthermore confirmed by Kaufmann (see references first post).

  231. VS Says:

    The conclusion should then read:

    ADF: Clear presence of a unit root
    KPSS: Presence of unit root detected at 5% and 10% sig, not at 1% sig.
    PP: No presence of unit root, but only when using (3) as the alternative hypothesis (this is a robustness issue)
    DF-GLS: Clear presence of a unit root

  232. VS Says:

    So, finishing up the KPSS section, because it’s flawed as written now (again, apologies for the confusion; it seems I confused myself there at one point :)

    CORRECTED VERSION

    Let’s now try to interpret the results of the KPSS test.

    We see that the null hypothesis of NO unit root is rejected at 10% for all methods used, and at 5% in most cases. At a 1% significance level, however, it is not rejected.

    Two things to note:

    (1) The test is asymptotic, so the critical values are only exact in very large samples

    (2) The null hypothesis in this case is stationarity, and the small-sample distortion severely reduces the power of the test (the power is the ‘inverse’ of the probability of a Type II error). In other words, the test is biased towards NOT rejecting the null hypothesis in small samples.

    However, in spite of this small-sample bias, we nevertheless manage to reject the null hypothesis of stationarity in all cases at the 10% significance level, and in all but one case at the 5% significance level. I conclude that there is strong evidence, when testing from ‘the other side’, and minding the small-sample-induced power reduction of the test (i.e. the fact that it is biased towards not rejecting stationarity in small samples), that the level series is NOT stationary.

    I(0) is therefore rejected.

  233. VS Says:

    CORRECT VERSION OF POST

    ============================

    Bart, it would be great if you could delete the previous couple of posts, because they might be confusing. I was doing three things at the same time, and somewhere in between I lost focus when dealing with the KPSS.
    [Done. BV]

    The official version of my statistical argument is given HERE:

    ============================

    Hi everybody,

    The debate has certainly become heated, and I apologize for my contribution to making it so.

    I tried, in this thread, to be as forthcoming as possible, but the plethora of insults finally got to me. Some of the people posting here have been posting very nasty stuff on other blogs when referring to me. The final straw, however, was the lambasting of participants in this discussion whose contributions I am actively enjoying (for example, the completely reprehensible bashing of Tim above).

    However, this is a spit fight I promised myself I wouldn’t engage in, and I’m sorry for any offence. I hope we can keep things scientific and well-argued now.

    Let’s try to bring this discussion back on track.

    I also want to thank Alex for clearing up the unit root/random walk difference. I have mentioned it in one of my many posts, but it got lost in the debate, and I should have stressed it more. Alex is 200% correct in stressing this distinction. I will however allow him to elaborate on that further, if he sees fit.

    Allow me to show the line of my statistical argument now (warning, it’s around 2500 words).

    ————————–

    I will show all the steps taken in the process of establishing the I(1) property of temperature series. I will list all test results, motivations, and decisions. This way Alex, or anybody else for that matter, will be able to inspect them.

    I will use the GISS-NASA combined surface and sea temperature record that I downloaded from their website. I will resort to this series because everybody seems to be using it in this discussion. However, I have to stress that more or less the same results are obtained using the HADCRUT or CRUTEM3 (or the GISS-NASA land-only) temperature records.

    ————————–

    TESTING THE I(1) PROPERTY

    ————————–

    We start by examining the GISS-NASA temperature series 1880-2008 (GISS-all). We want to see whether the series contains a unit root. As mentioned here, and in various other places, the presence of a unit root in a time series invalidates regular statistical inference (including OLS with AR terms) because the series is no longer stationary (stationarity being a necessary condition).

    Definition stationarity (from wiki):

    http://en.wikipedia.org/wiki/Stationary_process

    “In the mathematical sciences, a stationary process (or strict(ly) stationary process or strong(ly) stationary process) is a stochastic process whose joint probability distribution does not change when shifted in time or space. As a result, parameters such as the mean and variance, if they exist, also do not change over time or position.”

    ————————–

    AUGMENTED DICKEY FULLER TESTING

    ————————–

    I start with applying the Augmented Dickey Fuller test. The definition (and purpose) of the ADF is given here, again on wikipedia:

    http://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test

    I stress this part of the definition:

    “By including lags of the order p the ADF formulation allows for higher-order autoregressive processes. This means that the lag length p has to be determined when applying the test. One possible approach is to test down from high orders and examine the t-values on coefficients. An alternative approach is to examine information criteria such as the Akaike information criterion, Bayesian information criterion or the Hannan-Quinn information criterion.”

    The ADF can be applied in different forms, depending on what you want your alternative hypothesis to look like. The null hypothesis is the presence of a unit root. The alternative hypothesis (determining the specification of the test equation) can be:

    (1) no intercept
    (2) intercept
    (3) intercept and trend

    I will focus on (3) here, because this is the most ‘restrictive’ case and because I have been accused of ‘ignoring’ this alternative hypothesis when arriving at my test results. It also corresponds to what has been posted here and elsewhere as the probable alternative hypothesis. Do note, however, that the results given below are *much* more conclusive in cases (1) and (2).

    I will furthermore use all the information criteria (IC) available to me to arrive at the required lag length (‘p’ in quote above, I will refer to it as ‘LL’ below) in the ADF test equation.

    Hypothesis specification:

    H0: GISS-all contains a unit root
    Ha: GISS-all is trend stationary (testing against case 3)

    NOTE: All residuals of the test equations have been tested for normality via the Jarque-Bera test (the p-value is reported as JB below), and in all cases the null hypothesis of normality is not rejected. The ADF test, under the assumption of normality of the residuals, is then exact. For a definition of this normality test, see here:

    http://en.wikipedia.org/wiki/Jarque_bera

    ADF test results:

    IC: Akaike Info Criterion (AIC)
    LL: 3
    p-value: 0.3971
    Conclusion: presence of unit root not rejected
    JB: 0.393560

    IC: Schwarz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 0
    p-value: 0.0000
    Conclusion: presence of unit root rejected (I will get to this below, bear with me)
    JB: 0.202869

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 3
    p-value: 0.3971
    Conclusion: presence of unit root not rejected
    JB: 0.393560

    IC: Modified Akaike
    LL: 6
    p-value: 0.8619
    Conclusion: presence of unit root not rejected
    JB: 0.370261

    IC: Modified Schwarz
    LL: 6
    p-value: 0.8619
    Conclusion: presence of unit root not rejected
    JB: 0.370261

    IC: Modified HQ
    LL: 6
    p-value: 0.8619
    Conclusion: presence of unit root not rejected
    JB: 0.370261
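
    For readers who want to replicate this style of table, here is a minimal sketch in Python’s statsmodels (my tooling choice; VS appears to be using a different package, so exact lag choices and p-values may differ, and statsmodels offers only the AIC, the BIC and a t-stat rule, not the modified criteria). The file name below is hypothetical:

        import pandas as pd
        from statsmodels.tsa.stattools import adfuller

        # Hypothetical CSV holding the 1880-2008 GISS annual anomalies.
        y = pd.read_csv("giss_annual.csv")["anomaly"].to_numpy()

        # ADF on the level series, alternative = trend stationarity, case (3):
        for ic in ("AIC", "BIC", "t-stat"):
            stat, p, lags, nobs, crit, icbest = adfuller(y, regression="ct", autolag=ic)
            print(ic, "LL =", lags, "p =", round(p, 4))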

    Now, we see that using the BIC one arrives at a deviant number of lags (namely 0). This warrants further inspection. Note that the purpose of the lag length is to eliminate all residual autocorrelation, so that the ADF test can function properly.

    In order to inspect this issue, we compare the residuals of the test equations with 0, 3 and 6 lags respectively. Here I report the p-values of the Ljung-Box Q statistics for the first 10 lags of the residual series. The Q statistic is used to detect the presence of residual autocorrelation. A more detailed explanation is given here:

    http://en.wikipedia.org/wiki/Ljung%E2%80%93Box_test

    I quote, for those with no time to ‘click’ ;), the following:

    “The Ljung–Box test is a type of statistical test of whether any of a group of autocorrelations of a time series are different from zero. Instead of testing randomness at each distinct lag, it tests the “overall” randomness based on a number of lags, and is therefore a portmanteau test.”

    0 Lags in test equation:

    0.447
    0.683
    0.858
    0.102
    0.161
    0.159
    0.215
    0.168
    0.178
    0.081

    3 Lags in test equation:

    0.862
    0.885
    0.953
    0.983
    0.912
    0.950
    0.938
    0.837
    0.854
    0.731

    6 Lags in test equation:

    0.939
    0.997
    1.000
    0.999
    0.998
    1.000
    1.000
    0.999
    0.999
    0.989
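
    One way to reproduce these diagnostics (a hand-rolled version of the ADF test regression, my own construction, which makes the residuals easy to inspect) is to fit the case-(3) regression yourself and feed its residuals to the Ljung-Box test, reusing y from the sketch above:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import acorr_ljungbox

        def adf_residuals(y, lags):
            """Residuals of the ADF regression with intercept and trend:
            dy[t] = a + b*t + g*y[t-1] + sum_i d_i*dy[t-i] + e[t]."""
            dy = np.diff(y)
            rows, target = [], []
            for t in range(lags, len(dy)):
                rows.append([y[t], float(t)] + [dy[t - i] for i in range(1, lags + 1)])
                target.append(dy[t])
            X = sm.add_constant(np.asarray(rows))
            return sm.OLS(np.asarray(target), X).fit().resid

        for ll in (0, 3, 6):
            lb = acorr_ljungbox(adf_residuals(y, ll), lags=10, return_df=True)
            print(ll, "lags:", lb["lb_pvalue"].round(3).tolist())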

    So, once we use the BIC to determine the lag length, our residuals are very messy (i.e. borderline significances etc.; see the first sequence of Ljung-Box Q statistics). Higher numbers of lags, however, especially 6, successfully eliminate all traces of residual autocorrelation. Note also that the condition that the residuals of the test equation be normal is least solid when using the BIC for lag selection.

    Both conditions are necessary for the ADF to function properly.

    By using statistical diagnostic measures, we can therefore safely disregard the deviant lag length arrived at via the BIC, and use one of the other measures (so either the AIC or HQ, or the modified versions of all three – basically any IC except the BIC/SIC).

    Our ADF-based inference is coming to a close. We now need to test for the I(1) versus I(2) property of the GISS-all series, in order to make sure temperature is not I(2). Again, we perform the tests, now on the first difference of GISS-all, D(GISS-all).

    For the sake of readability (and because we still have a bunch of other tests to do) I will only report the p-values of the tests using the remaining 5 ‘untainted’ ICs. The IC-implied lag length will again be reported as ‘LL’.

    VERY IMPORTANT NOTE: The alternative hypothesis for the first-difference series will now be an intercept (or drift) instead of intercept and trend, so this is case (2). The reason for this is that an intercept in the first differences immediately implies a trend in the level series. Again, as above, I am giving the ‘deterministic trend hypothesis’ the benefit of the doubt (contrary to what has been claimed elsewhere).

    ADF test results, for D(GISS-all):

    IC: Akaike Info Criterion (AIC)
    p-value: 0.0000
    LL: 4
    Conclusion: presence of unit root rejected

    IC: Hannan-Quinn Info Criterion (HQ)
    p-value: 0.0000
    LL: 2
    Conclusion: presence of unit root rejected

    IC: Modified Akaike
    p-value: 0.0000
    LL: 0
    Conclusion: presence of unit root rejected

    IC: Modified Schwarz
    p-value: 0.0000
    LL: 0
    Conclusion: presence of unit root rejected

    IC: Modified HQ
    p-value: 0.0000
    LL: 0
    Conclusion: presence of unit root rejected

    So, using the ADF, we do not reject the presence of a unit root in the level series. However, once we difference the series, the unit root is rejected in all instances. We therefore conclude that the ADF test implies that GISS-all is in fact I(1).

    Now, let’s turn to other tests.

    ————————–

    KWIATKOWSKI-PHILLIPS-SCHMIDT-SHIN TESTING

    ————————–

    The careful reader has probably noted that the null hypothesis of the ADF test is that the series actually contains a unit root. One might argue that, due to the low number of observations in the series, or simply bad luck, this test fails to reject an untrue null hypothesis, namely that of a unit root, in the level series. In other words, there is the possibility that we are making a so-called Type II error.

    We can however test for the presence of a unit root by assuming, under the null hypothesis, that the series is actually stationary. The presence of a unit root is then the alternative hypothesis. In this case we ‘flip’ our Type I and Type II errors (I’m being very informal here; the analogy serves to help you guys ‘visualize’ what we are doing).

    To do that, we use a non-parametric test, the KPSS, which does exactly that: it takes stationarity around the trend as the null hypothesis, with the presence of a unit root as the alternative hypothesis.

    See also: http://en.wikipedia.org/wiki/KPSS_tests

    “In statistics, KPSS tests (Kwiatkowski-Phillips-Schmidt-Shin tests) are used for testing a null hypothesis that an observable time series is stationary around a deterministic trend.”

    IMPORTANT NOTE: The KPSS test statistic’s critical values are asymptotic. Put differently, the test is exact only when the number of observations goes to infinity. The ADF, on the other hand, is exact in small samples under normality of errors (which we tested for above using the JB test statistic).
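    In R, the tseries package offers a basic KPSS implementation (a sketch only: kpss.test exposes just a short/long bandwidth switch, not the Newey-West and Andrews selectors reported below, and giss is again a hypothetical vector of the anomalies):

    # KPSS with trend-stationarity as H0; a statistic above the critical
    # value rejects stationarity
    library(tseries)
    kpss.test(giss, null = "Trend")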

    KPSS test results, for two different bandwidth selection methods and two kernels (the Bartlett and Parzen spectral estimators).

    The asymptotic (!) critical values of this test statistic are:

    Critical values:

    1% level, 0.216000
    5% level, 0.146000
    10% level, 0.119000

    So once the Lagrange Multiplier (LM) test statistic is ABOVE one of these values, STATIONARITY is rejected at that significance level.

    BARTLETT KERNEL:

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.165696
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.154875
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% levels.

    PARZEN KERNEL:

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.147904
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.130705
    Conclusion: stationarity is not rejected at the 1% and 5% significance levels; rejected at the 10% level.

    Let’s now try to interpret the results of the KPSS test.

    We see that the null hypothesis of NO unit root (i.e. stationarity) is rejected at 10% for all methods used, and at 5% in most cases. At the 1% significance level, however, it is not rejected.

    Two things to note:

    (1) The test is asymptotic, so the critical values are only exact in very large samples

    (2) The null hypothesis in this case is stationarity, and the small-sample distortion severely reduces the power of the test (the power is one minus the probability of a Type II error). In other words, the test is biased towards NOT rejecting the null hypothesis in small samples.

    However, in spite of this small-sample bias, we nevertheless manage to reject the null hypothesis of stationarity in all cases at a 10% significance level, and in all but one case at a 5% significance level. I conclude that there is strong evidence, when testing from ‘the other side’, and minding the small-sample-induced power reduction of the test (i.e. the fact that it is biased towards not rejecting stationarity in small samples), that the level series is NOT stationary.

    I(0) is therefore rejected.

    ————————–

    PHILLIPS-PERRON TESTING

    ————————–

    Unlike the ADF, the Phillips-Perron test doesn’t deal with autocorrelation parametrically. Instead, the test statistic is modified directly to robustly account for it. Furthermore, this makes the test robust to heteroskedasticity (varying variance). However, as always with robust tests, these modifications reduce efficiency if the ‘robustness corrections’ are in fact not needed. This is however a very lengthy discussion and I’ll leave it there for now.
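    A hedged R parallel via the urca package (my sketch on the hypothetical giss vector, not the exact routine behind the numbers below):

    # Phillips-Perron with trend + intercept (case (3)), then intercept only (case (2))
    library(urca)
    summary(ur.pp(giss, type = "Z-tau", model = "trend", lags = "short"))
    summary(ur.pp(giss, type = "Z-tau", model = "constant", lags = "short"))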

    Let’s take a look at those PP test results then, shall we. We begin by taking case (3) again, so our test equation contains both an intercept and a trend. The test results reject the presence of a unit root:

    Phillips-Perron test on GISS-all, Bartlett kernel, Newey-West bandwidth:

    Ha: Trend and intercept (case (3))

    TEST STATISTIC -5.744931

    1% level, -4.031899
    5% level, -3.445590
    10% level, -3.147710

    Conclusion: the presence of a unit root is rejected

    Now let’s, just for the sake of sensitivity analysis, test using just an intercept (and no trend) in the test equation.

    Ha: Intercept (case (2))

    TEST STATISTIC: -1.555403 (p-value 0.5024)

    1% level, -3.482453
    5% level, -2.884291
    10% level, -2.578981

    Conclusion: the presence of a unit root is NOT rejected

    Just as was claimed elsewhere, and as confirmed by Kaufmann and Stern (2000), the PP test results lead us to conclude that the series is I(0) when the presence of a trend is set as the alternative hypothesis. Setting simply an intercept in the alternative in fact fails to reject the presence of a unit root.

    ————————–

    DICKEY FULLER GENERALIZED LEAST SQUARES TESTING

    ————————–

    Our final set of tests concerns the DF-GLS tests, which are similar to, but not the same as, the ADF tests. Again, we will use (3) as the alternative hypothesis, and we will use all available ICs to derive the required lag length.
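    The urca package again provides an R counterpart (a sketch; the lag.max choice is my assumption, and giss remains a hypothetical data vector):

    # Elliott-Rothenberg-Stock DF-GLS test, trend model
    library(urca)
    summary(ur.ers(giss, type = "DF-GLS", model = "trend", lag.max = 6))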

    DF-GLS test results:

    The critical values of the relevant Elliott-Rothenberg-Stock DF-GLS test statistic are given below:

    1% level, -3.551200
    5% level, -3.006000
    10% level, -2.716000

    IC: Akaike Info Criterion (AIC)
    LL: 3
    TEST STATISTIC: -1.759718
    Conclusion: presence of unit root not rejected

    IC: Schwarz / Bayesian Info Criterion
    LL: 3
    TEST STATISTIC: -1.759718
    Conclusion: presence of unit root not rejected

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 3
    TEST STATISTIC: -1.759718
    Conclusion: presence of unit root not rejected

    IC: Modified Akaike
    LL: 6
    TEST STATISTIC: -1.065158
    Conclusion: presence of unit root not rejected

    IC: Modified Schwarz
    LL: 5
    TEST STATISTIC: -1.305844
    Conclusion: presence of unit root not rejected

    IC: Modified HQ
    LL: 6
    TEST STATISTIC: -1.065158
    Conclusion: presence of unit root not rejected

    Again, just like in the case of the ADF test series, we do not reject the presence of a unit root, when using (3), i.e. linear trend and intercept, as our alternative hypothesis, Ha. In this case, even the SIC/BIC measure points to the use of 3 lags, and is in line with both the HQ and AIC.

    If we move on to the first difference series, the presence of a unit root is clearly rejected (I won’t bore you again with a series of tests, since this isn’t what we’re debating).

    So on the basis of the DF-GLS test series, using all information criteria, we again conclude that the GISS-all series is I(1).

    ————————–

    SUMMARY AND CONCLUSIONS

    ————————–

    We have now applied a myriad of different methods to check for the presence of unit roots. As you can see, and as Alex pointedly noted, you do actually have to interpret the results.

    ADF: Clear presence of a unit root
    KPSS: Stationarity (no unit root) rejected at 5% and 10% sig, not at 1% sig.
    PP: No presence of unit root, but only when using (3) as an alternative hypothesis (this is a robustness issue)
    DF-GLS: Clear presence of a unit root

    For me personally, adding all these together (and minding the small-sample properties of the ADF, if the autocorrelation is properly dealt with and the errors are normal), leads me to conclude that the GISS-all series are in fact I(1).

    I do have to ***stress*** here that I’m not the only one who, looking at these results, draws this conclusion. These tests have been extensively reported in the literature (see references in my first post), by both AGWH proponents and AGWH skeptics, and all conclude I(1).

    A very conservative econometrician or statistician *might* conclude that the evidence is ‘mixed’, although it leans towards the presence of a unit root. However, if one is THAT conservative, it is truly impossible to conclude, in light of all this evidence, that the series does NOT have a unit root.

    That was my whole point, and this was my statistical argument.

    VS

  234. Pat Cassen Says:

    So, VS, not knowing a Kwiatkowski test from a Karyotype test, I surmise that you have concluded that there is a ‘good chance’ that it is at least possible that GISS-all is a random walk?

    Time to get back to physics?

  235. Alex Says:

    No Pat Cassen, not a random walk, but there is a ‘good chance’ that a unit root is present.

  236. Pat Cassen Says:

    Alex – right, not a random walk – but I thought the point was that ‘no unit root’ = ‘random walk excluded’, and VS’s analysis above allows a unit root so he/she would say ‘random walk possible’?

    (I would say that the physics excludes pure random walk, but that is another matter.)

  237. Nir Says:

    I know next to nothing about stats, but reading what VS wrote, am I correct in understanding that his tests show that there may be a unit root found in temperature records? If so (from wikipedia) that means that the process is non-stationary… But doesn’t that just mean that there’s a trend there, which is exactly what one would expect if warming is taking place?

    Or am I grasping not only the wrong end of the stick, but something that isn’t even a stick?

  238. stereo Says:

    That was my whole point, and this was my statistical argument.

    VS

    Chicken.

  239. Pat Cassen Says:

    Hey VS – ‘way up top you said:

    “In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk.”

    Doesn’t seem to follow from your analysis down here (nor from what we learned from Alex). So what’s your conclusion re: random walk (as in “Is the increase in global average temperature just a ‘random walk’?)

  240. S. Geiger Says:

    thanks everyone for the nice dialog (well, minus a few digressions). Seems to me that VS has established that the temperature data reasonably meet the ‘unit root’ criterion. The unit root criterion is a necessary (but not sufficient) quality for a time series to be a random walk. Is that where we are at?

  241. stereo Says:

    VS has chickened out from taking on Tamino.

  242. dhogaza Says:

    Tim Curtin … I didn’t bother to read his entire post but …

    Bart (at March 14, 2010 at 15:02) Again you astonish me! You said: “The temperature effect of CO2 is approximately logarithmic (hence the sensitivity is defined per doubling of CO2 rather than per ppm (sic)).”

    First, your parenthesis is I fear wrong (enough so for dhogaza to be very rude about you if he understood it himself, but he does not!). The doubling of atmospheric CO2 is ALWAYS stated in terms of parts per million by volume (ppmv)

    I imagine everyone else here understands what Bart meant when he said “per doubling of CO2 rather than per ppm”, but for Curtin’s benefit – he means per doubling rather than linearly in relation to ppm.

  243. dhogaza Says:

    Schwarz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 0
    p-value: 0.0000
    Conclusion: presence of unit root rejected (I will get to this below, bear with me)

    In fairness, VS should point out that Tamino used BIC because of a typo in an early VS post. It’s not exactly Tamino’s fault that VS made a typo …

  244. dhogaza Says:

    I tried, in this thread, to be as forthcoming as possible, but the plethora of insults finally got to me

    You’ve complained several times of other people making insulting posts, somehow painting yourself as an angelic, non-insulting victim of … bullies?

    Yet in your post you say this:

    You will note that many of the assertion made about the then-standard approach to hypothesis testing in economics, are in fact applicable to present day ‘climate science’

    Which is nothing other than an insult to an entire body of science, since the use of quotes indicates a belief that it’s not really science, or at least, is bad science. It’s been pointed out elsewhere that you began your attack on Tamino by essentially accusing him of not understanding first-semester statistics. You’ve made a variety of other insults, as well.

    So, chill with the victim pleading, OK?

  245. dhogaza Says:

    Also, VS analyses the data for 1880-2009. No one claims a linear trend for that period of time, but rather for the last few decades, and indeed Bart, in his post, shows an OLS fit from 1975-present.

    It would seem to me that to show that an OLS fit for that time period is invalid you’d need to analyze the years for which the OLS is actually computed, rather than the data set as a whole.

    Using your previous example of the cyclic nature of climate over long time frames (driven by Milankovitch cycles, though several of us have had the impression you’re unaware of it), we’re not interested in time scales for which we have no *physical* basis for expecting a CO2-forced trend.

    We don’t, for instance, expect the CO2-forced trend due to anthropogenic sources to end the cyclic nature of climate over the timescale of Milankovitch cycles, but only to affect the amplitude (everything else being equal). This does not mean that there can be no CO2-forced trend causing rising temperatures at a time when the phase of the current Milankovitch cycle would lead us to expect a zero, or slightly negative, trend as we ease down the 25,000-50,000 year path towards the next ice age.

    Nor do we expect the trend from 1880-present to be linear – a key question in climate science back in the 1970s was “when will the additional forcing due to exponential increases in anthropogenic sources overwhelm the noise in the system and lead to a distinguishable trend?”.

    So what do you get when you run your analysis for the relevant period for which a linear trend is claimed, as opposed to the longer 1880-present timeframe, for which we already knew there was no linear trend and for which physics would not predict CO2 to dominate natural variation?

  246. dhogaza Says:

    It would seem to me that to show that an OLS fit for that time period is invalid you’d need to analyze the years for which the OLS is actually computed, rather than the data set as a whole.

    In case it’s not clear, I mean it seems to me that you need to show that the series for 1975-present is quite likely non-stationary.

  247. dhogaza Says:

    Also …

    Also, excuse the authority fallacy (and ensuing ridicule ;), but I’ll trust two Economics Nobel Prizes with my statistics, over some quote coming from a journal who’s editor is so sloppy in statistics that he write things like these in interviews with the BBC:

    “BBC – Do you agree that from 1995 to the present there has been no statistically-significant global warming

    Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level.”

    Not significant at a 95% significance level? Wow, that’s really not significant… it’s significantly insignificant even ;) (leave the ‘warming’, leave the discussion we had above, I’m simply showing how sloppy he is with statistics)

    Surely VS is aware that the choice of 95% is more a rule-of-thumb than anything, and is certainly not a result of statistical theory. And that some fields are dropping that arbitrary choice and just reporting p values.

    Fisher himself was known to consider a smaller value as indicating significance in some cases.

  248. dhogaza Says:

    A bit on the history of “statistical significance”:

    http://www.jerrydallal.com/LHSP/p05.htm

  249. dhogaza Says:

    The Wabbett had something to say about this:

    The bunnies tossed back a few beers, took out the ruler and said, hey, that total forcing looks a lot more like two straight lines with a hinge than a second order curve, and indeed, to be fair, the same thought had occurred to B&R


    We also check whether rfCO2 is I(1) subject to a structural break. A break in the stochastic trend of rfCO2 might create the impression that d = 2 when in fact its true value is 1. We apply the test suggested by Clemente, Montañés and Reyes (1998) (CMR). The CMR statistic (which is the ADF statistic allowing for a break) for the first difference of rfCO2 is -3.877. The break occurs in 1964, but since the critical value of the CMR statistic is -4.27 we can safely reject the hypothesis that rfCO2 is I(1) with a break in its stochastic trend.

    BUT, the period they looked at was 1880 – 2000. Zeroth order dicking around says that any such test between a second order dependence and two hinged lines is going to be affected strongly by the length of the record. Any bunnies wanna bet what happens if you use a longer record???

    Such as one of the proxy reconstructions going back centuries (not one constructed by Mann, just to make VS happy).

    Physics tells us there is a structural break, and if a statistical test on a subset of the available temperature record tells us there’s not, well, try a longer record.

    At which point we’re going to hear a whole lot of hand-waving about the fact that none of the dozen or so existent proxy reconstructions is valid.

    He’s already set up the foundation for this argument in his second post:

    I wouldn’t classify the test results I posted above as ‘torture of the data’; coming from my field, that judgement would be far more applicable to what Mann et al are doing with their endless and statistically unjustified ‘adjustments’ to proxy/instrumental records.

    Not that there’s any truth to the statement …

  250. Alex Says:

    To explain why many statistical tests are invalid when a unit root is present, I will have to introduce some notation commonly used in econometrics. This can be found in any undergraduate textbook and should not really cause any difficulties. In econometrics it is common to write the regression model in matrix notation, where the matrix X is an n-by-k matrix: n is the number of observations you have and k is the number of regressors in your regression model. Many statistical tests (e.g. T-test, F-test, Wald-test) rely on the assumption that, as your sample size goes to infinity, the matrix Q = (1/n)*X’X converges to a finite nonsingular matrix. However, if one or more of the regressors in X contain a unit root, then the matrix Q will become infinitely large (this is a consequence of the nonstationarity), so this assumption is violated. This is also the reason why the Dickey-Fuller test uses different t-statistics, instead of the ‘standard’ t-statistics.

    Another well known problem can occur when a variable with a unit root is regressed on another, completely independent variable with a unit root. This may lead to a phenomenon known as spurious regression: you’ll get the impression that you have fitted a perfect model, where the regressor has a lot of explanatory power, but in fact the two variables are completely independent of each other. The way to test whether two variables with a unit root are related is by means of cointegration.
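    A quick way to see the spurious regression phenomenon just described is to simulate it; the following R sketch (my own illustration, not Alex’s code) regresses two independent random walks on each other:

    # Two independent random walks: OLS typically reports a highly
    # 'significant' relationship even though none exists
    set.seed(42)
    n <- 130                      # roughly the length of the annual record
    x <- cumsum(rnorm(n))         # random walk 1
    y <- cumsum(rnorm(n))         # random walk 2, independent of x
    summary(lm(y ~ x))            # inflated t-statistics and R^2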

    @Bart

    You asked what the consequence is for the trend you calculated *if* the temp series contains a unit root. The answer depends on what you wanna do. If you just wanted to show the average annual increase between 1975 and 2009, then there is nothing wrong, but if you wanted to show that the underlying trend hasn’t changed, your way of analyzing the trend is not very suitable. The reason is that OLS requires that you correctly model the underlying ‘data generating process’. This is a term statisticians use when they refer to the mechanism (or set of equations) which generated the data. If you don’t model this underlying mechanism correctly, your estimates will likely be biased (known as omitted variable bias) and the test statistics will not have the right distributions under the null hypothesis. Now the model you estimate is:

    temp(t) = constant + beta*t + E

    where E is an error term. I think most of us will agree that this model is too simple to describe the underlying mechanism that determines temperature (and yes, I also think the random walk model is too simple). Moreover, the model you estimate does not contain a unit root, so *if* temp did have a unit root, then that would just be another reason why your model is misspecified. Now, I know that modeling the underlying mechanism correctly is easier said than done; I actually find this the hardest part of statistical analysis. The model should be based on theory (physics, climate science, etc); statistics has no role in this part. Once a model for temperature has been proposed, statistics can be used to test this model. I would find it very interesting if you, or maybe someone else on this blog, could write down what determines global mean temperature in a set of equations. That way I could try to work out a statistical test to see if it fits the data.
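    For concreteness, this trend model can be fitted in R as follows (a sketch; giss is a hypothetical vector of the anomalies, and, per the point above, the reported standard errors are only trustworthy if E is well-behaved):

    # temp(t) = constant + beta*t + E, estimated by OLS
    tt <- seq_along(giss)
    summary(lm(giss ~ tt))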

    @Adrian Burd

    VS already discussed the Phillips-Perron test in his post, but maybe I can still clarify it a little bit more. Both the Dickey-Fuller test and the Phillips-Perron test require that the error term of the test equation is white noise. If the error term contains autocorrelation, then it can’t be white noise. The two tests have different ways to deal with this problem. The DF-test adds lags of the dependent variable to the test equation until there is no more autocorrelation. The different selection criteria are different ways to determine the required number of lags. The ‘standard’ formula for the variance of the estimator will not work if there is autocorrelation in the error term. The Phillips-Perron test uses another formula to calculate this variance; however, this formula will only yield a variance which is approximately correct. The larger the sample size the better the approximation. Whether a sample of 128 observations is sufficient is hard to tell.

    @Pat Cassen

    You’re right that if the data contains a unit root, then it could be that the data follows a random walk. Fortunately, we can test this with fairly simple statistical methods in several ways. I’ll only do one of them. Let’s first assume that the data is indeed following a random walk, which means we can write it as:

    y(t) = y(t-1) + E

    where E is white noise. Now simply estimate the following equation:

    y(t) = b*y(t-1) + e

    using OLS on the GISS temperature series ranging from 1880 to 2009. The estimate for b is ~0.92, which lies close to 1, with an R^2 of ~0.82. On the basis of this someone might think that the model fits the data quite well and conclude that the data is indeed following a random walk. But if you look at the residuals and test for the presence of autocorrelation, you’ll get very strong evidence that the error term is autocorrelated. This clearly contradicts the random walk model, since that model assumes that the error term is white noise. So, on the basis of statistical testing, I would conclude that temperature is *not* a random walk. But remember, this does not mean that temperature doesn’t have a unit root!
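    This check is easy to replicate in R (a sketch, assuming a hypothetical vector giss holds the 1880-2009 annual anomalies):

    # Estimate y(t) = b*y(t-1) + e by OLS, then probe the residuals:
    # autocorrelation there contradicts the white-noise assumption
    y   <- giss
    fit <- lm(y[-1] ~ 0 + y[-length(y)])   # no intercept: pure random-walk form
    coef(fit)                              # ~0.92 on these data, per the text
    Box.test(resid(fit), lag = 10, type = "Ljung-Box")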

    VS has written thousands of words here. I don’t think that anybody who would write that much would be 100% consistent all the time. I read his lengthy post as well and I would like to add that I do indeed get the same results and, given the statistical evidence, I would indeed sooner conclude that temp has a unit root than not.

  251. VS Says:

    Bart, thanks for the moderating, much appreciated :)

    Hi Pat,

    The whole ‘random walk’ thing is somewhat of a red herring. First of all, I said ‘statistically speaking’ a ‘random walk’, implying that we can model it as such. On March the 5th I stated:

    “I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks.”

    I also wasn’t being accurate (D(GISS-all), for example, is much better described as an AR(3) process, but we’ll get to that later, I promise). My first post actually relates to what Alex posted just above. Once the series contains a unit root, regular inference, including OLS trends with confidence intervals, is invalid.

    What Tamino apparently told Bart, namely that a unit root ‘doesn’t really matter’, is simply false. I don’t know if Tamino is simply unaware of this or if he is misleading people on purpose because most of his blog entries rely on the assumption that unit roots ‘don’t really matter’.

    As for the trend: you can indeed calculate an average increase (although a simple arithmetic mean would suffice). Calculating confidence intervals (via OLS) however implies that the underlying data generating process (DGP) is in fact trend stationary. I think I have shown this not to be the case. Hence, the intervals are meaningless, and any implications about how the ‘trend is changing’ are spurious.

    Bart, you stated on March 14, 19:24:

    “I would expect that VS would agree that if the 130 year record is merely a random walk, that the latest 12 years are by far not enough to draw any conclusions from. Perhaps VS will join us in fighting strongly against the erroneous “1998″ claim.”

    While I’m not claiming that temperatures are a simple ‘random walk’ (again, thanks Alex for clearing that up, it got lost in the debate), I am claiming that the series contains a unit root.

    In that sense, I will definitely join you in fighting the erroneous ‘cooling trend’ claim. However, you guys have to quit blindly calculating trends too, for the very same reasons :)

    Anyway, Patrick, the purpose of the lengthy post is two-fold.

    (1) We are basically, slowly but surely, reproducing the results in the literature leading up to the BR publication discussed above. Once we have established temperatures to contain a unit root, and greenhouse gas forcings to contain two unit roots (also widely reported in the literature), we are set to evaluate the cointegration analysis proposed by BR.

    One thing at a time though.

    (2) The post is also a lengthy reply to Tamino’s post here:

    http://tamino.wordpress.com/2010/03/11/not-a-random-walk/

    There Tamino claimed that the GISS series does not contain a unit root. He furthermore claimed that I had not included a ‘trend’ term in my test equations when arriving at my results (untrue; see the lengthy post, almost all test equations include a trend term).

    He then proceeded to use the BIC/SIC, even though I claimed, well before Tamino posted his ‘refutation’ (my post is dated March 8, Tamino’s blog entry March 11), that it isn’t appropriate in this case:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1287

    I clearly stated:

    “I just saw that I wrote in my first post that the lag selection in my ADF tests was based on the Schwarz Information Criterion, or SIC. In fact, it was based on a related measure, the Akaike Information Criterion, or AIC.

    Using the SIC, which leads to no ‘lags’ being used, results in remaining autocorrelation in the errors of the test equation. That’s dangerous for inference.

    In the context of these temperature series, the AIC leads to 3 lags being employed, and successfully eliminates all remaining autocorrelation in the errors of the test equation (which has a deterministic trend as alternative hypothesis).

    Small issue, but I rather set it straight now, before somebody brings it up.”

    In his defense Tamino claimed that he had not read all my posts because I’m not ‘important’. This excuse doesn’t impress me much. If Tamino is the time series expert people apparently hold him to be, he would have checked all of that himself without being ‘called out’ (like I checked when I took the data the first time, and I’m not a TSA expert, I just had formal training in it).

    Apparently he’s been staring at the same 120-something observations for the past few years without properly testing for a unit root. In the meantime, he has written a whole series of blog entries ‘explaining’ to people how to calculate trends with AR terms.

    This is inexcusable.

    It also clearly points to a lack of formal training in TSA. The very first thing you are taught to do is to thoroughly test for the presence of unit roots, for the very reasons outlined by Alex above.

    He then proceeds to show his readers the ONLY TWO instances in which the unit root is rejected, namely

    a) PP test, with a trend and intercept (or (3), using notation in my previous post)
    b) ADF test, with a trend and intercept, using the BIC/SIC to derive lag length

    He fails to mention that most (if not all) other test setups point to the presence of a unit root. To add insult to injury, he then proceeds to accuse ME of ‘cherry picking’.

    Finally, when a reader (JvdLaan) referred to my latest post (with all the test results), he replies:

    “[Response: Seriously, folks, VS and his theories don’t deserve the attention.]”

    May I point out that this comment was posted on a blog entry devoted entirely to ‘my theories’. As a side note, I would like to thank Tamino here for the confidence, but I honestly cannot claim to have invented unit root analysis.

    PS. Alex, your contribution is very much appreciated. I also suspected (and hoped) you would check my stats, since you were the first one to actually post results using different ICs, on Tamino’s blog :) I’ll try to get back to your posts a bit later, I needed to set this straight first.

    PPS. I also clearly outlined, on March the 5th, why I don’t believe that Mann et al. proxy reconstructions can be used for the purpose of econometric inference. Here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1230

  252. Bart Verheggen Says:

    Dhogaza, Alex, VS,
    Thanks all for your latest constructive comments.

    VS, Tamino didn’t tell me that a unit root ‘doesn’t really matter’, but he did say that it affects the error of the trend more than the trend estimate itself. Which makes sense to me.

    Stereo,
    Leave the animals at home.

  253. VS Says:

    Hi dhogaza,

    Perhaps we can continue normally from here. You should take me off that ‘list’ of yours, though :P

    First off, that Jones quote was more of a ‘statistician’s joke’, and I really couldn’t help myself (I even apologized in advance :). The point is that Jones actually meant ‘significant at the 5% significance level’. A significance level of 95% is simply ridiculous. It implies that your ‘p-value’, of the null hypothesis that the trend coefficient is equal to zero, is lower than 95%.

    I think Jones actually meant: “0 is an element of the 95% confidence interval for the calculated trend coefficient”. This would correspond to a significance level of 5% being employed.

    But this is besides the point, it was a comic interlude :D

    You also asked me to test for a unit root in the past 30 years (i.e. the sample Bart used). I don’t think it’s relevant, because we have access to a longer series, so why throw away observations? Also, 30-something observations is really too few for the tests to function properly (small-sample distortions really start creeping up on you with fewer than 50 obs). I guess that 100 or so is the minimum in this case, but that’s just an educated guess.

    As for that quote, about ‘climate science’, I don’t think I insulted a field. I have pointed to a very influential paper on non-experimental hypothesis testing.

    Click to access the_probability_approach_in_econometrics.pdf

    Judging by what has been published w.r.t. statistical testing / regression analysis in the mainstream climate science literature, I firmly believe that that paper needs to circulate (no offense). I kindly invite you to read it, it was an eye-opener for me when I was an undergrad (perhaps one of the most influential in my decision to pursue a PhD), and Haavelmo even managed to fetch a Nobel prize (in part) for it.

    As for Tamino, my previous post explains my position in this situation.

    And finally, in the case of Rabbett: he should first get the definition of an integrated process right before butting into this discussion. Also, his post might have been ‘interesting’ if he had actually posted some test results, instead of ‘bunny hunches’. However, given that he clearly demonstrated that he’s unaware of what exactly we’re testing for (i.e. he defines an I(1)/I(2) process as ‘increasing as a first/second order function’), I wouldn’t hold my breath.

  254. VS Says:

    “It implies that your ‘p-value’, of the null hypothesis that the trend coefficient is equal to zero, is lower than 95%.”

    Actually, in the context of the Jones quote, it implies that the relevant p-value is HIGHER than 95%.. :)

  255. Scott Mandia Says:

    I am glad to see the discussions are back on track.

    Alex: your comments have helped me greatly to understand the issue at hand.

    VS also told us to look at several Wiki sites. I found the image to the right on this link helpful:

    http://en.wikipedia.org/wiki/Unit_root#Unit_root_hypothesis

  256. RedLogix Says:

    Scott,

    For a control systems person that graphic you’ve linked to is very interesting.

    The ‘green line’ case is commonly encountered in what we call ‘integrating processes’, for instance, filling a vessel. A constant inflow into a tank (with no outflow changes) results in a steadily rising level. If the inflow is interrupted briefly, ie turned off for a short period and then turned back on, the level rise will cease (or the level will drop if there is outflow)… and then resume at its original rate when the inflow is restored. But the level will always be lower than if the interruption had not taken place.

    The ‘blue line’ case is even more common, for instance the position of a valve that is controlling the flow in a pipe. If the valve is closed briefly the flow reduces, but when it is opened back to the original position the flow restores to the original rate.

    The difference between these cases is fundamental and comes about for purely physical reasons: in the first case the vessel is acting as a physical integrator, while in the second there is no such mechanism in play, only ‘return to equilibrium’ forces.
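    These two cases are easy to visualize with a toy simulation in R (entirely my own sketch, and of course no claim about the climate itself):

    # A unit shock at t = 50: the integrator keeps it, the AR(1) forgets it
    n     <- 100
    shock <- replace(rep(0, n), 50, 1)
    integ <- cumsum(shock)                                      # 'vessel' case
    ar1   <- filter(shock, filter = 0.7, method = "recursive")  # 'valve' case
    plot(integ, type = "l", ylab = "response"); lines(ar1, lty = 2)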

    It occurs to me (at risk of seeming hopelessly naive) that the planet’s climate has elements of BOTH mechanisms in play, ie the ‘return to equilibrium’ behaviour that radiative transfer and energy balance demand, AND the presence of a huge thermal integrator, ie the oceans… acting on different time scales.

    Does this suggest a decent physical reason why this pesky ‘unit root’ can be extracted from the temperature record?

  257. VS Says:

    Hi Scott, RedLogix,

    I’m really happy you guys are diving into the matter. However, you ought to be careful when interpreting the implications of that figure! :)

    While a process containing a unit root indeed displays the depicted property (i.e. permanent effect of a ‘shock’) this doesn’t imply that these shocks just come and go ‘randomly’. That would indeed be in stark contrast with what we know about our climate.

    I replied to Arthur Smith a bit earlier (on March 10th) with this:

    “Besides, cointegration is actually a tool to establish equilibrium relationships with series containing stochastic trends. Note that cointegration implies an error correction mechanism, where two series never wander off too far from each other. It therefore allows for stable/related systems which we nevertheless observe as ‘random walks’. The term ‘random walk’ is a bit misleading here, so it is better to say that the series contain a stochastic trend.

    Take a look at the matter, it’s quite interesting. I keep saying it: hence the Nobel prize!”

    The main point is that once we have established that our series contain unit roots, we might nevertheless be able to ‘cointegrate’ them (note that multiple series can furthermore be cointegrated, e.g. vector cointegration).

    If we succeed, we have established an ‘equilibrium’ relationship between the variables in question. Note that these statistical modeling techniques allow for both runaway warming due to GHG forcings as well as negative feedbacks (so only ‘temporary effects’ due to forcings), or whatever else we find in our data.

    I think we’ll be able to get into that matter once we have established the presence of unit roots in various series we are investigating (e.g. solar irradiance, GHG forcings).
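    To give a flavour of what such a cointegrating relationship looks like, here is a small simulated example in R (my own Engle-Granger-style sketch; strictly, the residual-based step needs Engle-Granger critical values rather than the plain ADF ones used here):

    # Two I(1) series sharing a stochastic trend are cointegrated:
    # a linear combination of them is stationary
    set.seed(1)
    z <- cumsum(rnorm(200))        # common stochastic trend
    x <- z + rnorm(200)            # I(1)
    y <- 2 * z + rnorm(200)        # I(1), cointegrated with x
    library(tseries)
    adf.test(resid(lm(y ~ x)))     # stationary residuals signal cointegration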

  258. Adrian Burd Says:

    Alex,

    Many thanks for your clear posting on unit roots and the tests for them.

    So here’s a question. What type of physical model would yield these results? I ask because it’s unclear to me whether the way to interpret these statistical results is to re-examine the physical theory, or whether they are just a way of saying something about the noise in the data, and that, statistically, we don’t have sufficient data to get a good signal over the noise.

    To put it another way, are these statistical tests telling us something profound about the physics of the system? Or are they telling us something about the signal to noise ratio in the data?

    My impression from reading what you and VS have posted is that it’s the latter. However, some of VS’s comments make me think it might be the former.

    Adrian

  259. Rob Traficante Says:

    Hi Alex and VS

    Thanks for the discussion. I’ve no real idea about time series apart from what you might get in the first or second year of a maths degree, so all this is very helpful. Just to examine some of the points Dhog raised, I decided to run the tests on truncated series, starting with a series that spanned from 1880 to 1910 and, adding a year at a time, running through to a series that spanned from 1880 to 2009. I know I’m bound to get significant results just by chance, but I was looking to see if there was some way to measure – I don’t know the phrase I’m looking for, but I’ll choose ‘consistency of effect’ – with the ADF tests. Here’s what I used in R (I’m not terribly familiar with R, so please be kind):

    # set up the end points: each ADF test uses data from 1880 up to 'endyear'
    startseries <- 1910
    endseries <- 2009
    lengthseries <- length(startseries:endseries)

    # set up a null list to write the results to
    results.AIC <- list(endyr = rep(NA, lengthseries),
                        pvalue = rep(NA, lengthseries),
                        lags = rep(NA, lengthseries))

    # loop through, truncating the series at the specified year
    for (endyear in startseries:endseries)
    {
      temp.temp <- V2[V1 <= endyear]
      # first pass: let the AIC pick the lag length (up to 10)
      test1 <- CADFtest(temp.temp, type = "trend", max.lag.y = 10, criterion = c("AIC"))
      lagnum <- test1$max.lag.y
      # second pass: rerun with that lag length fixed
      test2 <- CADFtest(temp.temp, type = "trend", max.lag.y = lagnum, criterion = c("none"))
      results.AIC$pvalue[endyear - startseries + 1] <- test2$p.value
      results.AIC$lags[endyear - startseries + 1] <- test2$max.lag.y
      results.AIC$endyr[endyear - startseries + 1] <- endyear
    }

    That’s my version of ‘brute force programming’! In the syntax, assume V1 is the year in the GISS data, and V2 is the temperature. I ran a similar loop for the BIC, HQC and MAIC criteria (which is all the R add-on has).

    BIC rejects the null consistently after, say, the 1880-1920 truncated series. HQC rejects the null for most truncated series, except when the end year is 2005, 2006, 2007, 2008 or 2009. MAIC almost always fails to reject the null. AIC is more difficult to describe – it rejects the null for most truncated series up to 1960, but then mainly fails to reject, except for some particular series (those where I truncate at 1985, 1986, 1992, 1993, 1994, 1999 and 2000). If I’ve done this correctly (by no means a given), what would be your interpretation, if any, of the ‘consistency’ of these tests, or am I just data-mining to death?

    Interestingly, with the truncated temp series 1880-2000 (which is the period BR use for their equation (2)), it’s possible we would reject the null using the ADF test.

    I could use a similar approach truncating ‘from the left’ as it were, eg dropping 1880, then 1880-1881, then 1880-1882, and see how the test results look. Any point in doing so…?

    Regards
    Robert

  260. dhogaza Says:

    First off, that Jones quote was more of a ‘statistician’s joke’, and I really couldn’t help myself (I even apologized in advance :). The point is that Jones actually meant ‘significant at the 5% significance level’. A significance level of 95% is simply ridiculous. It implies that your ‘p-value’, of the null hypothesis that the trend coefficient is equal to zero, is lower than 95%.

    It is common in science to state that p = 0.05 is equal to a 95% confidence level.

    In this case, 1995-present gives p = 0.076, at least according to one person who computed it. Or a 92.4% confidence level, as it is often put.

    1994-present gives p < 0.05 using HadCRUT, which is

    1. why the skeptic-provided question used 1995 as the start point (if you poke around you’ll see folks like Lindzen and Motl suggesting the use of 1995 because it’s the *earliest* start year for which the trend is not significant)

    2. why it's cherry-picking and dishonest, because in a field where 30-year trends are considered minimal for meaningful discussion, it's a bit surprising that even 1994-present yields a statistically meaningful trend.

    and of course we'll probably both agree that Jones dropped the ball in his answer, leaving himself open to blatant quote-mining.

    He should've responded by not answering the question and simply stating "it's significant from 1994-present and all earlier dates going back to 1970". He should've recognized that the question was dishonest, and a set-up for future misrepresentation, and refused to play the game their way.

  261. dhogaza Says:

    You also asked me to test for a unit root in the past 30 years (i.e. the sample Bart used). I don’t think it’s relevant, because we have access to a longer series, so why throw away observations?

    Given that we have temperature reconstructions going back 400 years which are, according to the NAS, very reliable, which have narrow error bars compared to the reconstructions in the interval -400 to -2000 years, by choosing 1880 to present you’re already throwing away data.

    As to why start at the mid 70s, again, it comes from knowledge of the physics and from observations. We know that from the end of the LIA forward, temps have been roughly flat until the 1970s. We have a physical explanation – indeed, we *had* a physical *prediction* beforehand, which current observations support – for the same. Why treat it as a uniform system? Why not look at the time period of interest, the time period in which physics informs us that CO2 forcing should be strong enough for a trend to emerge from the noise?

  262. dhogaza Says:

    “Besides, cointegration is actually a tool to establish equilibrium relationships with series containing stochastic trends. Note that cointegration implies an error correction mechanism, where two series never wander off too far from each other. It therefore allows for stable/related systems which we nevertheless observe as ‘random walks’. The term ‘random walk’ is a bit misleading here, so it is better to say that the series contain a stochastic trend.

    Take a look at the matter, it’s quite interesting. I keep saying it: hence the Nobel prize!”

    It ain’t going to lead to a Nobel prize in physics, trust me.

    As hard as it might be to believe … physical systems *do* exist.

    The myth of the mathematician “proving” a bumblebee can’t fly does apply here, on a couple of different levels.

  263. dhogaza Says:

    To put it another way, are these statistical tests telling us something profound about the physics of the system? Or are they telling us something about the signal to noise ratio in the data?

    My impression from reading what you and VS have posted is that it’s the latter. However, some of VS’s comments make me think it might be former.

    Or we can ask what the physics tells us about expectations over the last 130 years. We can ignore Milankovitch cycles over such a short time frame so …

    We have TSI fluctuating in a way that’s not exactly random, but not cyclical to the point where we can predict solar output in advance over such a time frame.

    We have a significant negative forcing in the form of large volcanic events that spew stuff up into the stratosphere in an unpredictable way.

    We have periodic redistributions of ocean heat that affect temperature, again, not exactly random but the timing and magnitude are currently unpredictable except in the very near future (ENSO).

    Am I missing anything significant?

    So lots of noise, no long-term trend on this timescale, relatively short-term perturbations when TSI changes significantly for an extended time, or when volcanic activity is running a bunch of “heads” or “tails” in a row, etc. Meteorologists picked 30 years as a rule-of-thumb for climate analysis because such perturbations typically don’t persist for such long periods.

    I don’t think the statistical argument is terribly surprising in this situation.

    But if you think of the drunk walking from the lamp post starting in 1880 … when he began his walk, there was a light breeze from the right. By 1975 that light breeze had become noticeable, a stiff breeze, always from the right. She is now experiencing a gale force wind, and by 2100 will be experiencing Cat 3 hurricane wind, always from the right.

    A drunk walking in such conditions is going to veer left in response to the wind, and as it increases, further and further to the left.

    If a statistical analysis fails to capture the physical change in the system, it don’t mean the wind ain’t blowin’ harder and harder.

    It simply means that statistics is misleading us because the tests being used are too coarse to capture the change in the physical system.

    It certainly doesn’t mean, as two economists claim, that “AGW is disproved”.

  264. dhogaza Says:

    Hmmm, looks like my drunken walker went through a sex change some time in the 20th century… :)

  265. Ron Broberg Says:

    VS, I wasn’t very happy with your summary. Despite having no knowledge of the statistics in question, I could see some dodginess there. Here is my summary of your post.

    ADF: Presence of unit root not rejected in 5 cases, rejected in 1
    KPSS: Stationarity (no unit root) rejected at 5% and 10% sig, not at 1% sig.
    PP: No presence of unit root
    DFG: Presence of unit root not rejected

    All of the above are consistent with “no presence of unit root”

    ADF-DIFF: Clear presence of a unit root
    DFG-DIFF: Clear presence of a unit root

    The difference methods, OTOH, indicate “clear presence of unit root”

    So what is happening when we move from the data to the differences? Why are we seeing evidence rejecting unit roots in the first set and a clear presence of unit roots in the second set? Is this an indication of weakness in the conclusions? In the methods selected? Of the noisiness of the data? Of my ignorance of the appropriate tests?

    I’m also interested in seeing a set of such tests running up the decades:
    (this is not a request – more like homework for me one day)
    1880-2010
    1890-2010

    1960-2010
    1970-2010
    1980-2010

    CO2 has not increased linearly over the 130 years in question. How do non-linear trends affect the tests used?

    Climate scientists do not believe that CO2 is the only forcing in effect. Do the tests in this post tacitly assume that it is?

    WUWT has posted a Scafetta piece claiming a strong periodic function in global temperature. Can these tests be used against periodic “trends”?

    VS, I am intrigued. But glaringly aware of my ignorance.

  266. KenM Says:

    As to why start at the mid 70s, again, it comes from knowledge of the physics and from observations. We know that from the end of the LIA forward, temps have been roughly flat until the 1970s.

    We’ve just skewered Motl for cherry-picking his start time of 1995.
    We’ve just lambasted Goddard for cherry-picking the 1980-something start date for his “snow analysis.”

    VS does his analysis over the entire GISS data set, but you think ignoring the first 90+ years of that dataset is OK because a model created *after* 1975 predicted that things would change in the 70s?

    I find it hard to swallow that temperature is a random walk too. The random walk conclusion says more to me about the weakness of the statistical methods than it does about nature. But seriously – how can you suggest doing the same thing as Motl and Goddard with a straight face?

  267. dhogaza Says:

    VS does his analysis over the entire GISS data set, but you think ignoring the first 90+ years of that dataset is OK because a model created *after* 1975 predicted that things would change in the 70s?

    Yes, I do think it’s OK, because it might help us understand what’s going on with the dataset as a whole.

    The two economists who “disproved AGW” noticed the break in the data a few decades ago, too, and looked to see if such a break could be teased out statistically. I think they failed to do so because the data’s inconclusive in that timeframe, assuming they did their stats correctly. Thus my wondering what happens if you go back 400 years using the proxy reconstructions which, for that time frame, are not controversial and which have relatively tight error bounds.

    Looking at the last 30-40 years might tell us that even VS and Alex would accept that an OLS fit is valid over that time frame, even if they claim it’s possibly not valid over the entire data set.

    Take a look at Bart’s first post – he did an OLS fit 1975-on, did you complain then? VS is saying “an OLS fit 1975-now is possibly invalid because I show a unit root in the data 1880-present”. That’s not necessarily true. I asked “hey, how about the 1975-now data alone”. VS says he suspects there’s not enough data points to properly test. OK, perhaps that’s true.

    The difference between Steve Goddard, for instance, and what I’m suggesting is that I’m saying “look harder at the data to better understand it”, not “do this, not that, ignore the rest, and poof! Global warming is disproved” which is an accurate description of Goddard’s hand-waving.

    And remember that underlying this is a difference of opinion between Tamino, a PhD in statistics whose entire professional career involves time series analysis, as I understand it, and a PhD in economics who uses statistics as a tool, but perhaps hasn’t as strong a theoretical background in statistics as does Tamino.

    Remember, VS is making an extraordinary claim – OLS fits to climate data cannot be shown to be valid – and as with all such claims, it requires extraordinary evidence, since I’ve never seen statisticians outside economics make that claim.

  268. VS Says:

    Hi KenM,

    You mean this?

    http://tamino.wordpress.com/2010/03/16/still-not/

    I think I agree.

    Now he’s performing unit root tests on fewer than 34 observations. Those observations that fit his hypothesis.

    His ‘objective’ sample: 1975-2008.

    Here guys, plot it.

    -0.04
    -0.16
    0.13
    0.01
    0.09
    0.18
    0.26
    0.05
    0.26
    0.09
    0.05
    0.13
    0.26
    0.31
    0.2
    0.38
    0.35
    0.13
    0.14
    0.23
    0.38
    0.29
    0.4
    0.56
    0.32
    0.33
    0.48
    0.56
    0.55
    0.49
    0.63
    0.54
    0.57
    0.43

    Note that when we employ p lags in the test equation, we lose the first p+1 of these observations (one to the differencing, p to the lagged differences), so only the last n-p-1 are actually used.

    Note also that we need to estimate 3+p coefficients, and the variance of the regression, so 4+p parameters.

    So

    0 lags in test equation: we are estimating 4 parameters with 33 observations
    4 lags in test equation: we are estimating 8 parameters with 29 observations
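    A one-line sanity check of that bookkeeping in R (embed() builds the lagged matrix; tamino_sample is a hypothetical vector holding the 34 values above):

    # Usable rows for an ADF regression with p = 4 lagged differences
    p <- 4
    nrow(embed(diff(tamino_sample), p + 1))   # 34 obs -> 33 diffs -> 29 rows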

    He then proceeds to flush unit roots and the Nobel prize for economics down the toilet:

    “The whole “unit root” idea is nothing but a “throw some complicated-looking math at the wall and see what sticks” attempt to refute global warming. Funny thing is, it doesn’t stick. In fact, it’s an embarrassment to those who continue to cling to it.

    But hey — that’s what they do.”

    This Grant Foster guy is amazing.

  269. dhogaza Says:

    No, VS, he’s throwing *your use of it* down the toilet.

    Again, the similarity to the myth of the mathematical proof that a bumble bee can’t fly is remarkable and informative.

    Now he’s performing unit root tests on less than 34 observations. Those observations that fit his hypothesis.

    It’s not “his hypothesis”, it’s standard climate science: recent decades fit a (nearly) linear model. The entire GISS series doesn’t. What is so hard to understand about that? There are physical reasons for the emergence of the CO2 forcing signal about that time: TSI variability went almost flat (simplifies things, don’t have to account for it), negative forcing due to industrial aerosol emissions dropped steeply in the industrialized west during the 1970s (another simplification), and of course most importantly CO2 emissions were rising exponentially leading to a linear increase in forcing.

    So the physical *prediction* is that CO2 forcing is increasing in linear fashion, its magnitude has become greater than fluctuations in TSI and would’ve outstripped negative forcing from industrial aerosols at some point but did so more quickly due to clean-air regulations in the west, etc.

    Therefore, we expect the climate response to this linear forcing to be linear as that forcing grows in magnitude to a sufficient level to overwhelm the variability of other natural forcings.

    In the 1970s scientists were arguing if we’d already reached that point, and if not, when we would reach it.

    They certainly weren’t arguing that climate was responding linearly to a much lesser CO2 forcing in the face of abnormally high TSI levels in, say the early 1900s.

    So you’re imposing an assumption of linearity over the entire timeseries that a) doesn’t represent the view of scientists in the field and b) has no physical basis.

    He claims that using an ADF test over a non-linear time series isn’t a good idea. I did a little googling and came up with this buried in a summary of statistical testing techniques:

    Disadvantage of the ADF test: lack power when the model specification under the alternative hypothesis is nonlinear (see Nelson and Plosser, 1982; Taylor et al., 2001; and Rose, 1988)

    Seems like Tamino’s telling the truth here.

    Well, the “alternative hypothesis” in this case is that the series from 1880-present is non-linear. If you think you’re breaking big news here, you’re sadly mistaken.

    Why have you adopted the hypothesis that the series 1880-present is closely matched by a linear model? What is your *physical* basis for doing so?

    You’re also ignoring what he says is actually most important, not the ADF test over the linear part of the recent climate record, but testing the data 1880-present taking into account CO2 forcing.

  270. VS Says:

    http://chart.apis.google.com/chart?chs=500x400&chf=bg,s,ffffff&cht=ls&chd=t:15.18,0.00,36.70,21.51,31.64,43.03,53.16,26.58,53.16,31.64,26.58,36.70,53.16,59.49,45.56,68.35,64.55,36.70,37.97,49.36,68.35,56.96,70.88,91.13,60.75,62.02,81.01,91.13,89.87,82.27,100.00,88.60,92.40,74.68&chco=0066ff

    Google chart: Tamino’s sample.

  271. dhogaza Says:

    He then proceeds to flush unit roots and the Nobel prize for economics down the toilet

    May I suggest you read his post again – this time for comprehension rather than to mine for bits to hang insults on?

  272. dhogaza Says:

    Google chart: Tamino’s sample.

    What about it? Looks pretty similar to the chunk Bart fit his OLS to in the first place.

  273. dhogaza Says:

    Just to be very clear for those who aren’t playing along at home (by reading Tamino’s post):

    He then proceeds to flush unit roots and the Nobel prize for economics down the toilet:

    “The whole “unit root” idea is nothing but a “throw some complicated-looking math at the wall and see what sticks” attempt to refute global warming. Funny thing is, it doesn’t stick. In fact, it’s an embarrassment to those who continue to cling to it.

    But hey — that’s what they do.”

    The “they” in this case is clearly “those who claim they’ve proven the physics behind AGW wrong via statistical analysis of the GISTEMP data”.

    Not:

    1. economics as a whole

    2. most specifically not the Nobel prize winner(s) alluded to by VS.

    VS: stop the BS, please. You’re smart. You know who Tamino was talking about when he made that statement.

  274. jfr117 Says:

    so what does this show – that we have a 40 year dataset that shows linear growth in temperature? that this 40 year period is warmer than….what?

    if we can’t compare the most recent data to pre-1975 due to nonlinearity, what can we conclude?

  275. dhogaza Says:

    so what does this show – that we have a 40 year dataset that shows linear growth in temperature? that this 40 year period is warmer than….what?

    Well, that starts getting us into the realm of hockey sticks, etc. I don’t see the point of that in this discussion. But let me just say that one can study hockey sticks and ignore Mann and short-centered PCA or “regular” PCA. There are enough hockey stick reconstructions using different statistical analysis techniques to outfit an NHL team.

    if we can’t compare the most recent data to pre-1975 due to nonlinearity, what can we conclude?

    We can compare it, we simply can’t say “since VS assumes linearity, climate scientists must assume linearity, and once they do, we can say that AGW is disproved” :)

    Tamino does look at the entire dataset, but by incorporating CO2 forcing in his analysis – I suggest you read his latest post and, if you have questions, ask him directly.

  276. MP Says:

    @VS,

    If the test requires more data points, why not have a go at monthly data then? If global T is a random walk, should it not be a random walk on every time scale?

  277. VS Says:

    Hi jfr117, welcome back.

    The only thing shown here is that Grant Foster doesn’t know how to test his own hypotheses. Here’s a proper test for what Tamino and dhogaza are proposing (if those are indeed two different individuals, which I’m starting to doubt).

    The Zivot-Andrews unit root test: http://www.jstor.org/pss/1391541

    With a Stata module here: http://ideas.repec.org/c/boc/bocode/s437301.html

    The idea is the following. We allow the series to display a structural break in the test equation. However, instead of ‘cherry picking’ we allow the estimation algorithm to determine the most probable location of this ‘structural break’, assuming it exists.

    The test equation then allows for this break, when testing for unit roots. That way we don’t have to depend on Grant Foster picking out our break and throwing away the data he doesn’t like.
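
    For anyone who wants to reproduce this outside Stata, here is a minimal sketch in Python (assuming statsmodels >= 0.11, which ships a zivot_andrews implementation; the file and column names are placeholders, not my actual setup):

        # Sketch: Zivot-Andrews unit root test with an endogenously chosen break.
        import pandas as pd
        from statsmodels.tsa.stattools import zivot_andrews

        temp = pd.read_csv("giss.csv")["temp"].dropna().values  # placeholder data

        # regression='c': break in intercept; 't': break in trend; 'ct': both
        for reg in ("c", "t", "ct"):
            stat, pval, crit, lag, bpidx = zivot_andrews(temp, regression=reg, autolag="AIC")
            print(reg, round(stat, 3), "break at obs", bpidx,
                  "reject unit root at 5%?", stat < crit["5%"])

    The null of a unit root is rejected only when the minimum t-statistic is more negative than the critical value, exactly as in the results below.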

    ———————–

    Zivot-Andrews unit root test for gisstemp_all

    ———————–

    We allow for all three possible alternative hypotheses. So we can allow for a structural break in:

    (1) Intercept
    (2) Trend
    (3) Both.

    Zivot-Andrews unit root test for gisstemp_all
    Allowing for break in intercept
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -3.473 at 1987 (obs 138)
    Critical values: 1%: -5.43 5%: -4.80

    Conclusion: unit root NOT rejected.

    Zivot-Andrews unit root test for gisstemp_all
    Allowing for break in trend
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -3.765 at 1977 (obs 128)
    Critical values: 1%: -4.93 5%: -4.42

    Conclusion: unit root NOT rejected.

    Allowing for break in both intercept and trend
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -4.410 at 1964 (obs 115)
    Critical values: 1%: -5.57 5%: -5.08

    Conclusion: unit root NOT rejected.

    Overview of endogenously determined structural breaks, with test equation containing:

    (1) Intercept: 1987
    (2) Trend: 1977
    (3) Intercept and Trend: 1964

    ———————–

    CONCLUSIONS

    ———————–

    By allowing for a structural break in the intercept and the trend separately, or in both together, the null hypothesis of unit root presence is not rejected in any instance.

    Conclusion: unit root in GISS_all

    The results furthermore fall in line with my elaborate unit root testing results posted here:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1524

    Here’s my view on Tamino’s previous ‘cherry picking’ of IC’s and test results:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1550

    Grant Foster, stop pretending you are a statistician.

  278. dhogaza Says:

    I’m not tamino, and I don’t pretend to be a statistician.

    VS:

    Grant Foster, stop pretending you are a statistician.

    Back to insults, I see, you who would pose as being the victim of a bunch of insulting bullies …

    Fact is, he *is* a statistician, while you’re an economist who uses statistics as a tool. Whether or not “economist” should be considered a term of insult is an exercise left to the reader …

    What I see here is VS skating past objections without answering them, then throwing more mathy-looking stuff at the wall, hoping it will stick.

    You haven’t addressed Tamino’s using CO2 as a covariate.

    You still haven’t addressed the physical evidence. The bumble bee flies …

  279. dhogaza Says:

    So we have a statistical test saying we can’t reject the null hypothesis that there is no structural break.

    And we have physics arguing – nay, *predicting* – said break.

    And our economics dude says the inability to reject the null hypothesis based on limited data allows one to state that AGW is disproven.

    Does that about sum up the absurdity?

  280. dhogaza Says:

    No, I read too quickly … your test allows for a structural break, but fails to reject a unit root.

  281. MartinM Says:

    Zivot-Andrews unit root test for gisstemp_all
    Allowing for break in intercept
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -3.473 at 1987 (obs 138)
    Critical values: 1%: -5.43 5%: -4.80

    Since when did GISTEMP go back to 1850?

  282. VS Says:

    Hi MartinM,

    My dataset also contains HADCRUT and CRUTEM3, and a bunch of GHG forcings, which all go back to 1850.

    My observations are therefore ID’d from 1850 onward. Stata then returns the exact observation ID. This (obviously) has no impact on the estimation procedures, since GISS data are NA before 1880.

  283. dhogaza Says:

    What is the physical basis for assuming there’s a *single* structural break in the series, rather than, say, two?

  284. Bart Says:

    Keep it nice guys.

    Re-reading the very informative 2009 Ramanathan and Feng article I came across this relatively simple explanation of the earth’s radiation balance:

    So the process of the net incoming (downward solar energy minus the reflected) solar energy warming the system and the outgoing heat radiation from the warmer planet escaping to space goes on, until the two components of the energy are in balance. On an average sense, it is this radiation energy balance that provides a powerful constraint for the global average temperature of the planet.

    I.e. The global average temperature only changes over climatic timescales (multiple decades or longer) if there is an imbalance in the radiation budget. As is now indeed the case. Climate is to a certain extent deterministic.
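
    The textbook zero-dimensional version of that balance, as a sketch (with S ≈ 1365 W/m^2 the solar constant, alpha ≈ 0.3 the planetary albedo and sigma the Stefan-Boltzmann constant):

    \[ \frac{S(1-\alpha)}{4} = \sigma T_e^4 \;\Rightarrow\; T_e = \left( \frac{1365 \times 0.7}{4\sigma} \right)^{1/4} \approx 255\ \mathrm{K}, \]

    about 33 K below the observed ~288 K surface temperature; the difference is the greenhouse effect.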

  285. dhogaza Says:

    This guy, for instance, explores the data and suggests there are two.

    And there’s a known physical explanation for the structural break in the early 1900s …

    He’s a civil engineer with a masters, so I’ll concede that our economist probably has a stronger background in stats than he does, just as VS should concede that Tamino has a stronger background in stats than VS.

    But it’s interesting.

    And unlike VS, he didn’t simply *assume* any particular number of break points; he’s attempted analysis, at least.

  286. Bart Says:

    VS,

    The tests that Tamino did in his latest post seem to be the most relevant for the issue at hand:
    1) For the time period for which the trend is approximately linear
    2) Using the estimated radiative forcing instead of a linear trend
    Where 2) is obviously superior.

    I may have missed it, but using these criteria (especially the second one, though as dhogaza explained there are good reasons to pick the timeframe from 1975 onwards, give or take a few years), what would you get as test results?

  287. KenM Says:

    Aren’t GHG forcings a function of the observed temperature? I mean, if the temp goes down for 10 years, scientists look at that and say, “hmmm – why’d that happen?” Eventually they decide on a cause, e.g. aerosols, and *then* they assign a forcing that explains the deviation of the temperature from an expected trend that’s also plausible with the proposed cause.
    If I’ve got that right, then isn’t using GHG forcings as a covariate to the CADF test kind of circular logic? If they got the forcings wrong because they assumed a trend that isn’t necessarily there, then isn’t using those incorrect forcings in the test wrong too?
    Not saying they got the forcings wrong, just saying using them as a covariate is wrong when the notion of a trend is being challenged.

  288. dhogaza Says:

    2) Using the estimated radiative forcing instead of a linear trend

    He corrected my misunderstanding, he’s using the net forcing from all sources (not just GHGs) that are used as forcing inputs to GISS Model E, as described here.

    Eyeballing, it looks like a bunch of the others might somewhat balance out, so I’m not sure it makes a huge difference vs. just using CO2 forcing.

  289. dhogaza Says:

    Aren’t GHG forcing’s a function of the observed temperature? I mean, if the temp goes down for 10 years, scientists look at that and say, “hmmm – why’d that happen?”

    Uh … no. CO2 forcing comes from radiative transfer physics, which I can pronounce but not do :)

    The temp drop from Pinatubo was modeled quite accurately *in advance*, i.e. after physical observations as to the amount of stuff ejected into the stratosphere were available, but before the (roughly) three-year cooling period was over.

    Eventually they decide on a cause, e.g. aerosols, and *then* they assign a forcing that explains the deviation of the temperature from an expected trend that’s also plausible with the proposed cause.

    In the case of aerosol cooling from the 1940s or so until clean air laws made a change, you are *partially* right. They didn’t just “assign a forcing” to fit observations, they rather tried to calculate forcing and then compared to observations.

    If I’ve got that right, then isn’t using GHG forcings as a covariate to the CADF test kind of circular logic?

    The kind of circularity you mention would be useless, and unscientific, and why would one waste one’s time doing something useless and unscientific???

  290. S. Geiger Says:

    quick question of curiosity. With our varied temperature measurement networks we can get an idea of the ‘thermal content’ of the global system… which would seem to require both water and atmosphere (but mostly water). If we could very accurately measure this quantity, would it be a smooth continuous function of time or would it bounce around (I guess due to the random element of ‘weather’)? Are the bounces we see actual changes in the earth’s energy balance or just noise in our measurement network?

    BTW, I’ve greatly enjoyed this discussion… at some points it lives up to the potential for such blogs to provide a real forum for honest discussion… also kind of interesting how hard it is for some not to stray back into ad hom land… very emotional topic, I guess.

  291. dhogaza Says:

    Here, for instance, you can see where the figures for various GHG historical concentrations come from, along with the ranges they assign for the future (which of course can’t be precisely predicted, because we don’t know what kind of response to the problem we’ll take).

    This data is transformed into forcing figures using *physics*.

    So you can see that for GHG forcing, your assumption of circularity is false.

    There are other pages which describe the provenance of each and every forcing …

  292. KenM Says:

    Hi Dhogaza – I see the quoted text, where’s that from?

  293. KenM Says:

    doh! didn’t realize the whole thing was a link! I’ll check it out.

  294. dhogaza Says:

    If we could very accurately measure this quantity, would it be a smooth continuous function of time or would it bounce around (I guess due to the random element of ‘weather’)?

    Bounces …

    Are the bounces we see actual changes in the earth’s energy balance or just noise in our measurement network?

    There’s considerable uncertainty with the sea temp stuff, and the measurements are for the surface and (relatively) near surface; there’s no time series of measurements for the ocean as a whole.

    BTW you’ve come perilously close to what Trenberth was talking about in the infamous “we can’t account for the missing warming”, as he was talking about not being able to account for excess energy in the system (determined IIRC by satellite measurements) because of inadequate observations of the entire atmosphere/ocean system.

    Let’s not go OT with that, but thought you might find it interesting…

  295. KenM Says:

    Didn’t take long to reject, I’m afraid – those are predictions for models. The forcings (like the one Tamino used) are not predictions but ‘observations’. I say ‘observations’ in quotes because I understand that they are really best-guess approximations – i.e. there were no aerosol-measuring instruments collecting data in 1940.

  296. dhogaza Says:

    also kind of interesting how hard it is for some not to stray back into ad hom land…very emotional topic I guess.

    Well … VS is essentially repeating arguments made by a couple of economists in a paper in which they bluntly stated: “AGW is disproved”.

    You can see why that might annoy some people…

  297. KenM Says:

    In the case of aerosol cooling from the 1940s or so until clean air laws made a change, you are *partially* right. They didn’t just “assign a forcing” to fit observations, they rather tried to calculate forcing and then compared to observations.

    Agreed, but this does not mean they are correct. Plausible, certainly, but that’s not a good enough defense when someone comes along and argues that temperature in the last 100 years or so resembles a random walk. You can’t say their plausible theory is wrong because someone else’s plausible theory says they are wrong.

    Put it another way, let’s say for argument’s sake that the 40s aerosol levels were estimated incorrectly. Would you agree this is possible?

    If so, then you must also agree that using those forcings as a covariate in the CADF test *to prove* that the temps are not following a random walk is circular.

  298. VS Says:

    Hi KenM and Bart

    That regression (‘climate forcings act as trend’) is completely flawed statistically.

    GHG forcings are found to be I(2) by everyone in the literature, i.e. the series contains two unit roots. Temperature has been found to be I(1), above yet again as well as throughout the literature.

    Simply regressing temperatures on GHG forcings (or simple CO2 forcings) leads to spurious results. Tamino either knows this, and he’s misinforming people on purpose, or he doesn’t, and he’s incompetent.
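
    To see the problem rather than take it on faith: here is a minimal Monte Carlo sketch in Python (with statsmodels; exact numbers vary with the seed). Two independent random walks have no true relationship, yet OLS finds a “significant” slope most of the time.

        # Sketch: spurious regression. Regressing one random walk on an
        # independent one yields |t| > 1.96 far more often than the nominal 5%.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n, reps, hits = 150, 1000, 0
        for _ in range(reps):
            x = np.cumsum(rng.standard_normal(n))  # random walk, unrelated to y
            y = np.cumsum(rng.standard_normal(n))
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            hits += abs(fit.tvalues[1]) > 1.96     # nominal 5% criterion
        print(hits / reps)  # typically well above 0.5, nowhere near 0.05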

    Hence the need for polynomial cointegration, employed by Kaufmann et al (2006) (but tested for incorrectly) and employed by Beenstock and Reingewertz (tested for correctly).

    I quote, Kaufmann and Stern (2000):

    “The univariate tests indicate that the temperature data are I(1) while the trace gases are I(2). That is, the gases contain stochastic slope components that are not present in the temperature series. This result implies that there cannot be a linear long-run relation between gases and temperature.”

    And I quote Beenstock and Reingewertz:

    “The order of non-stationarity refers to the number of times a variable must be differenced (d) to render it stationary, in which case the variable is integrated of order d, or I(d). We confirm previous findings [refs] that the radiative forcings of greenhouse gases (CO2, CH4 and N2O) are stationary in second differences (i.e. I(2)) while global temperature and solar irradiance are stationary in first differences (i.e. I(1)).

    Normally, this difference would be sufficient to reject the hypothesis that global temperature is related to the radiative forcing of greenhouse gases, since I(1) and I(2) variables are asymptotically independent [ref]. An exception, however, arises when greenhouse gases, global temperature and solar radiation turn out to be polynomially cointegrated [ref]. In polynomial cointegration the greenhouse gases that are stationary in second differences must share a common stochastic trend, henceforth the “greenhouse trend”, that is stationary in first differences. If this “greenhouse trend” exists and if it is cointegrated with global temperature and solar irradiance, we may conclude that greenhouse gases are polynomially cointegrated with global temperature and solar irradiance.”
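
    For orientation only: the simplest cointegration check, plain two-variable Engle-Granger rather than the polynomial variant B&R use, is a one-liner in statsmodels (a sketch; the series names are hypothetical):

        # Sketch: Engle-Granger cointegration test (NOT polynomial cointegration).
        # The null hypothesis is "no cointegration".
        from statsmodels.tsa.stattools import coint

        t_stat, pval, crit = coint(temp, f_co2, trend="c")  # hypothetical series
        print(t_stat, pval)  # a small p-value is evidence of cointegration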

    What Tamino wrote down is complete nonsense, statistically speaking, quite apart from the issue illustrated above. He furthermore:

    1) didn’t test his hypothesis properly,
    2) used a far too short and specifically selected sample,
    3) again disregarded the presence of TWO unit roots in the GHG forcings series (all of them; I’ll post test results).

    ————————

    Again, because the posts keep getting spammed away by certain individuals who feel they need to share each and every thought and ‘hunch’ with us as soon as it pops into their heads.

    **Unit root analysis, including all test results and motivations. Conclusion, GISS-all is I(1):

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1524

    **My view on Tamino’s cherry picking of test results, in his ‘Not a Random Walk’ (strawman) blog entry:

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1550

    **Graph of sample employed in Tamino’s latest ‘analysis’

    Link to sample:
    http://chart.apis.google.com/chart?chs=500x400&chf=bg,s,ffffff&cht=ls&chd=t:15.18,0.00,36.70,21.51,31.64,43.03,53.16,26.58,53.16,31.64,26.58,36.70,53.16,59.49,45.56,68.35,64.55,36.70,37.97,49.36,68.35,56.96,70.88,91.13,60.75,62.02,81.01,91.13,89.87,82.27,100.00,88.60,92.40,74.68&chco=0066ff

    [Note that the alternative hypothesis set by Tamino, which is set against the unit root null hypothesis, is that the series has a straight linear trend. Convenient sample choice, no?]

  299. VS Says:

    Here are the test results for GHG forcings:

    I also reproduced the findings of Kaufmann (various papers) and Beenstock and Reingewertz concerning the I(2) property of CO2 forcings. I downloaded the data from GISS-NASA, here:

    http://data.giss.nasa.gov/modelforce/ghgases/GHGs.1850-2000.txt

    I furthermore transformed the ppm series into forcing, following Kaufmann, Beenstock AND wikipedia.

    F_CO2=5.35*Ln(CO2_ppm/285.2)
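
    In code the transformation is a one-liner (a sketch; the function name is mine):

        # Sketch: CO2 concentration (ppm) -> radiative forcing (W/m^2),
        # per the expression above, with 285.2 ppm as the base level.
        import numpy as np

        def co2_forcing(ppm, c0=285.2):
            return 5.35 * np.log(np.asarray(ppm) / c0)

        print(co2_forcing([285.2, 370.0]))  # -> [0.0, ~1.39] W/m^2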

    As before, we can proceed to test for unit roots. I will test against the alternative hypothesis of a trend and intercept. Note however that the results hold (arguably more firmly) when testing against alternative hypotheses (1) and (2) (listed in my lengthy post above).

    ————————–

    AUGMENTED DICKEY FULLER TESTING

    ————————–

    As before, we first test the level series. In contrast to the temperature series, the IC’s all deliver the same results. I will employ the three standard ones in this case, namely the AIC, BIC/SIC and HQ.

    Level series, F_CO2, ADF testing

    IC: Akaike Info Criterion (AIC)
    LL: 12
    p-value: 1
    Conclusion: presence of unit root not rejected
    JB: 0.000002 (!! the errors have a mad kurtosis, over 5)

    IC: Schwartz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 7
    p-value: 0.9987
    Conclusion: presence of unit root not rejected
    JB: 0.000003 (same as above)

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 7
    p-value: 0.9987
    Conclusion: presence of unit root not rejected
    JB: 0.000003 (same as above)

    First difference series, D(F_CO2), ADF testing

    IC: Akaike Info Criterion (AIC)
    LL: 7
    p-value: 0.6764
    Conclusion: presence of unit root not rejected
    JB: 0.000000 (same as above)

    IC: Schwartz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 6
    p-value: 0.7871
    Conclusion: presence of unit root not rejected
    JB: 0.000002 (same as above)

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 6
    p-value: 0.7871
    Conclusion: presence of unit root not rejected
    JB: 0.000002 (same as above)

    Second difference series, D(F_CO2, 2), ADF testing

    IC: Akaike Info Criterion (AIC)
    LL: 5
    p-value: 0.0000
    Conclusion: presence of unit root rejected
    JB: 0.000002 (same as above)

    IC: Schwartz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 5
    p-value: 0.0000
    Conclusion: presence of unit root rejected
    JB: 0.000002 (same as above)

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 5
    p-value: 0.0000
    Conclusion: presence of unit root rejected
    JB: 0.000002 (same as above)

    So, if these test results are to be trusted, we cannot reject the presence of a unit root in the level series (so not I(0)); likewise, we cannot reject the presence of a unit root in the first difference series (so not I(1)). However, the presence of a unit root in the second difference series is clearly rejected, so we conclude that CO2 GHG forcings are I(2). In other words, they need to be differenced twice in order to obtain stationarity.
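
    The level / first-difference / second-difference routine above is mechanical enough to automate; a compact sketch in Python with statsmodels (series name hypothetical):

        # Sketch: find the order of integration d by differencing until the
        # ADF test (trend-and-intercept alternative) rejects a unit root.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        def integration_order(x, alpha=0.05, max_d=3):
            x = np.asarray(x, dtype=float)
            for d in range(max_d + 1):
                pval = adfuller(x, regression="ct", autolag="AIC")[1]
                if pval < alpha:   # unit root rejected: series is I(d)
                    return d
                x = np.diff(x)     # difference once and retest
            return max_d

        # e.g. integration_order(f_co2) should return 2 if the series is I(2)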

    I have to note here that normality of the errors of the test equation is rejected in all instances. This implies that the ADF test is not exact, which might be a bit problematic for inference.

    So let’s consult other tests.

    ————————–

    KWIATKOWSKI-PHILLIPS-SCHMIDT-SHIN TESTING

    ————————–

    Just as before, we now take stationarity to be the null hypothesis, and employ the KPSS test. The asymptotic critical values of the test statistic are, again:

    1% level, 0.216000
    5% level, 0.146000
    10% level, 0.119000

    As before, once the value of the test statistic exceeds one of the above values, stationarity is rejected at that significance level. I will only report the Bartlett kernel method results, because I read yesterday (while doing the KPSS tests) that this approach is most stable in small samples. The results however also hold for the Parzen kernel (in fact, they are even more solid).
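
    The same check is available in statsmodels, whose trend-case critical values match the ones quoted above (a sketch; series name hypothetical):

        # Sketch: KPSS with a trend ('ct'). The null is stationarity, so a LARGE
        # statistic rejects; crit comes back as {'10%': 0.119, '5%': 0.146, ...}.
        from statsmodels.tsa.stattools import kpss

        stat, pval, lags, crit = kpss(f_co2, regression="ct", nlags="auto")
        print(stat, crit)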

    Level series, F_CO2, KPSS testing

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.291981
    Conclusion: stationarity is rejected at all significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.173808
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% significance levels.

    First difference series, D(F_CO2), KPSS testing

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.253348
    Conclusion: stationarity is rejected at all significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.244655
    Conclusion: stationarity is rejected at all significance levels.

    Second difference series, D(F_CO2, 2), KPSS testing

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.021613
    Conclusion: stationarity is NOT rejected at any significance level.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.031438
    Conclusion: stationarity is NOT rejected at any significance level.

    Applying the KPSS test, we again confirm that CO2 GHG forcings are I(2).

    ————————–

    PHILLIPS-PERRON TESTING

    ————————–

    The PP test, again, is the odd one out. When testing the level series, we cannot reject the presence of a unit root.

    Phillips-Perron test on Level series, F_CO2
    Bartlett kernel, Newey-West bandwidth:

    Ha: Trend and intercept (case (3))

    TEST STATISTIC 3.218009 (p-value: 1)

    1% level, -4.020396
    5% level, -3.440059
    10% level, -3.144465

    Conclusion, presence of unit root is not rejected

    Phillips-Perron test on First difference series, D(F_CO2)
    Bartlett kernel, Newey-West bandwidth:

    Ha: Trend and intercept (case (3))

    TEST STATISTIC -6.398659 (p-value: 0.000)

    1% level, -4.020822
    5% level, -3.440263
    10% level, -3.144585

    Conclusion, presence of unit root is rejected

    So following the PP test, we find that the F_CO2 series is in fact I(1). This test AGAIN deviates from all the other tests. However, for what follows, it still arrives at CO2 forcings being I(d+1) vs I(d) relative to the temperature series. This in turn corresponds to the relationship determined by the other tests.
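
    statsmodels itself has no Phillips-Perron implementation, but the arch package does; assuming that package, a sketch (series name hypothetical):

        # Sketch: Phillips-Perron test; trend='ct' = intercept plus trend,
        # matching case (3) above. Null hypothesis: unit root.
        from arch.unitroot import PhillipsPerron

        pp = PhillipsPerron(f_co2, trend="ct")
        print(pp.stat, pp.pvalue)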

    ————————–

    DICKEY FULLER GENERALIZED LEAST SQUARES TESTING

    ————————–

    In order not to bore you with yet more test results: the DF-GLS test clearly indicates that the F_CO2 variable is in fact I(2).

    ————————–

    SUMMARY AND CONCLUSIONS

    ————————–

    We find, in line with what both Kaufmann and Beenstock have found, that there is strong evidence to suggest the presence of two unit roots in the F_CO2 series. Rabett posted something on his blog about a structural break in the series (although he completely misspecified the nature of that hypothesized ‘break’).

    In this case, allow me to cite the results of BR.

    They tested whether the F_CO2 series is in fact a I(1) series with a structural break in 1964. I quote:

    “We also check whether rfCO2 is I(1) subject to a structural break. A break in the stochastic trend of rfCO2 might create the impression that d = 2 when in fact its true value is 1. We apply the test suggested by Clemente, Montañés and Reyes (1998) (CMR). The CMR statistic (which is the ADF statistic allowing for a break) for the first difference of rfCO2 is -3.877. The break occurs in 1964, but since the critical value of the CMR statistic is -4.27 we can safely reject the hypothesis that rfCO2 is I(1) with a break in its stochastic trend.”

    I have to note they ‘express’ themselves slightly incorrectly here. What they are saying is that the null hypothesis of a unit root (in the first difference, allowing for a break) is not rejected, i.e. the test statistic does not exceed the critical value.

    They furthermore proceed to test all GHG forcings for their I(d) properties, and report:

    “We have applied these test procedures to the variables in Table 2. It turns out that the radiative forcings of all three greenhouse gases are I(2).”

    These are their findings, in Table 2 (for the record, I used the 1850-2000 data):

    rfCO2, I(2), 1850-2006
    rfCH4, I(2), 1850-2006
    rfN2O, I(2), 1850-2006

    I think we can trust these results (although, if someone is particularly sceptical, we can run them as well).

  300. dhogaza Says:

    Didn’t take long to reject I’m afraid – those are predictions for models. the forcings (like the one Tamino used) are not predictions but ‘observations’.

    Look more closely, the historical data is blue in the graphs, the projections for various scenarios (i.e. how we control emissions) are the yellow part.

    Which values get used? Depends on what time frame the model is being used to explore. They want to run improved versions of the model against past forcing data to see if they do a reasonably good job of matching historical climate trends.

    I say ‘observations’ in quotes because I understand that they are really best-guess approximations – i.e. there were no aerosol-measuring instruments collecting data in 1940.

    That’s a real problem for *some* forcings, not for all. There’s good proxy data for some, not all, good historical measurements for some things, not others.

    Aerosols are a problematic one, AFAIK you are right about that.

    However, before scrubbers etc. were introduced there is good economic data, and a lot of dirty industries didn’t change particularly much from, say, the 1920s(?) until air quality measurements were being done regularly. So I would assume that if you know that a particular technology for producing X by burning coal produced Y lbs of CO2 for each lb of X in 1950, you can probably work backwards to how much CO2 was produced by that particular production technology in the 1920s or 30s… etc.

  301. dhogaza Says:

    Agreed, but this does not mean they are correct. Plausible, certainly, but that’s not a good enough defense when someone comes along and argues that temperature in the last 100 years or so resembles a random walk. You can’t say their plausible theory is wrong because someone else’s plausible theory says they are wrong.

    No, we say their theory is implausible because it’s *unphysical*, and can then try to figure out where they’ve made their mistake.

    Like the myth of the mathematical proof that a bumble bee can’t fly.

  302. dhogaza Says:

    Put it another way, let’s say for argument’s sake that the 40s aerosol levels were estimated incorrectly. Would you agree this is possible?

    If so, then you must also agree that using those forcings as a covariate in the CADF test *to prove* that the temps are not following a random walk is circular.

    The net forcing is dominated in recent decades by CO2 and we have good figures for that going back to the 1950s, before the period of concern, and proxy info before that.

    Really, the CO2 data isn’t controversial.

    And outside a handful of economists who think they’re going to trump a whole lotta physicists, the fact that recent decades of warming closely fit a linear model is wholly uncontroversial.

  303. dhogaza Says:

    GHG focings are found to be I(2) by everyone in the literature, i.e. the series contains two unit roots. Temperature has been found, and above yet again, as well as throughout the literature, to be I(1).

    Simply regressing temperatures on GHG forcings (or simple CO2 forcings) leads to spurious results. Tamino either knows this, and he’s misinforming people on purpose, or he doesn’t, and he’s incompetent.

    Or he didn’t use GHG forcings, which happens to be true (my bad for saying so).

  304. VS Says:

    Bart,

    He is really spamming HEAVILY. Please do something about it.

    How many posts has he written today? And what exactly has he claimed in them? Evidence? References? Formal proofs?

    This isn’t dhogaza’s climate-Twitter page, is it?

    The whole discussion is getting polluted by it. My results are being puked on without any arguments.

    What is the point of this, really?

    [If you have a message for me alone, you can email me via the link on the right. Otherwise, comment in English on an English thread. Dhogaza has brought many good arguments to the table, and his total wordcount is still way below yours here. BV]

  305. S. Geiger Says:

    Moderator/Bart – any chance you could broker a deal to have VS and Tamino ‘debate’ this issue (sans all other posters, save perhaps Alex, who stays on point and without any personal attacks) in a new thread? You seem to get along with Tamino… and allow VS to post on your site, so maybe this is possible. Maybe you could set up a few ground rules that they could both agree to beforehand(?).

    Thanks

  306. dhogaza Says:

    Yet … yet … the bumble bee flies …

    Really, it’s the ultimate in hubris to think that statistical tests by economists disprove physical theory.

    CO2 absorbs LW IR, we know that, there’s tons of observational data backing up AGW theory, etc etc.

    B&R are essentially demanding that a large percentage of known physics be thrown in the toilet. Not only is AGW not real, but there’s a very good likelihood that they’ve just proven that airplanes don’t fly …

  307. dhogaza Says:

    And, VS, if you’re going to insult me while victim-pleading about other people insulting you … knock off the secret decoder-ring messages to Bart and post in English:

    He is really spamming HEAVILY. Please do something about it.

    How many posts has he written today? And what exactly has he claimed in them? Evidence? References? Formal proofs?

    This isn’t dhogaza’s climate-Twitter page, is it?

    The whole discussion is getting polluted by it. My results are being puked on without any arguments.

    What is the point of this, really?

  308. dhogaza Says:

    Moderator/Bart – any chance you could broker a deal to have VS and Tamino

    Tamino doesn’t seem to think VS is worth the effort, and the paper that VS is essentially using to build his case hasn’t gotten traction in the real world, so I can’t say I blame him.

  309. MartinM Says:

    Here are the test results for GHG forcings

    What about the net forcings, as per http://data.giss.nasa.gov/modelforce/NetF.txt ? A quick look in R seems to rule out a unit root in that series.

  310. dhogaza Says:

    B&R has some stuff in it that’s very interesting …

    If instead of a permanent increase in its level, the change in rfCO2 were to increase permanently by 1 w/m^2, global temperature would eventually increase by 0.54 C.

    If the level of solar irradiance were to rise permanently by 1 w/m^2, global temperature would increase by 1.47 C.

    This should give an idea as to HOW MUCH PHYSICS MUST BE THROWN OUT if B&R’s analysis is correct.

    Physics doesn’t differentiate between different sources of energy. Add 1w/m^2 of energy, and the climate will respond the same way regardless of the source. B&R have just invented “smart energy” …

    I don’t have the statistical background to refute them, but … I know that quoted statement is false.

    Again, the bumble bee flies …

  311. dhogaza Says:

    What about the net forcings, as per http://data.giss.nasa.gov/modelforce/NetF.txt ? A quick look in R seems to rule out a unit root in that series.

    It’s net forcing that matters, despite B&R’s belief that 1 w/m^2 is different depending on the source of that 1 w.

    VS: how can you place faith in a paper that trumpets such an obviously unphysical conclusion?

  312. dhogaza Says:

    VS, I must thank you because I hadn’t bothered to look at B&R before and I’ve read it through.

    There are other insanely non-physical conclusions they draw from their analysis, for instance that doubling of CO2 won’t lead to a permanent increase in temperature.

    Apparently the CO2 forcing must “wear out” somehow, because as time goes on, it apparently ceases to absorb LW IR.

    And somehow, everything we know as to why the earth’s not a frozen ball roughly 33C colder than it is today is false.

    You’d be better off spending your time figuring out where B&R have gone wrong, rather than continue your argument based on their results.

    They really are arguing that, in essence, bumble bees don’t fly.

  313. Tim Curtin Says:

    dhogaza (aka bumblebee) contests the B&R statement that “If instead of a permanent increase in its level, the change in rfCO2 were to increase permanently by 1 w/m2, global temperature would eventually increase by 0.54 C. If the level of solar irradiance were to rise permanently by 1 w/m2, global temperature would increase by 1.47 C” by saying “Physics doesn’t differentiate between different sources of energy. Add 1w/m^2 of energy, and the climate will respond the same way regardless of the source”. Granted, the B&R statement is poorly worded (they can hardly mean that the change in rfCO2 would increase by 1 W/m2 p.a., or that IR would or could rise by 1 W/sq.m p.a.), but it still behoves us all to recognise that changes in IR as measured at top of the atmosphere would have the same effect at ground/surface level as changes in [CO2] and other GHG in the troposphere also at ground/surface level. What are the surface level forcings of both CO2 and IR? – that is what one supposes the physics is about.

  314. dhogaza Says:

    Someone want to translate Tim’s post into plain English?

    I think B&R’s statement stands on its own, and see nothing that indicates that they’re using radiative forcing in a non-standard way (which is the only way their statement could be true).

  315. Ray Says:

    VS, Bart, Tim et al. Great dialog. Thanks.

  316. Tim Curtin Says:

    Dear Bumblebee (aka Dhogaza), apologies, let me try again. TSI actually measures the sun’s output, but what reaches say Alaska or Hawaii in the form of solar surface radiation is quite different, whatever Hansen & Sato or you believe. For example, at Barrow in July 2006 SSR was 57.22 W/sq.m, and at Hilo it was 95.48, while TSI was about 1365.5 W/sq.m (Hansen Sato Ruedy Lo 2009, Fig.4). This may have had something to do with the different mean daily temperatures in Barrow & Hilo in July 2006, 3.86oC and 24.53oC respectively, despite identical RF from [CO2].

    This suggests to me that cointegrating the GISStemp global means for July with TSI and the atmospheric CO2 level (which is the same at Barrow & Hilo) may be missing something. The same applies to all other months of record at Barrow & Hilo, and at some 1200 locations for which SSR data are available for 1990 to 2006 (and 1960-1990 for some 250 of those) from NOAA. What stops Hansen’s GISS from gridding local SSR to get global mean SSR by month in W/sq.m? In my view the variations in TSI reported by Hansen (2009) from 1366.5 W/sq.m in 1980 to 1365.2 W/sq.m in 2008 are less likely to impact on the GISS Mean Temps for those years than the variations in SSR at say Barrow from July 1980 (88 W/sq.m) to July 2006 (95.48) and everywhere else in the GISS grids from which it derives its global mean (I realise the dates are not identical but no doubt bumble can link me to monthly TSI).

    I can translate into Dutch if that would help (if I have a 2nd language that’s it from my schooling in Afrikaans).

  317. PeterW Says:

    [edit. Keep it polite. BV]

  318. Tim Curtin Says:

    Apologies to all: I left out the word not in my last but one, which should have read “…it still behoves us all to recognise that changes in IR as measured at top of the atmosphere would NOT have the same effect at ground/surface level as changes in [CO2] and other GHG in the troposphere also at ground/surface level”. This was I trust made clear from my next posting.

  319. Bart Verheggen Says:

    KenM,

    The climate forcings are calculated based on physics; they are not derived from the temperature trend.

    VS,

    Your reply at March 16, 23:00 doesn’t convince me. If the goal is to see whether the temperature is forced (deterministic) as opposed to random (stochastic), then the best trend estimate is to take the estimate of the net radiative forcing. Using a fit to only the CO2 forcing instead will lead to inferior results in comparison. I see that later (23:02) you used the actual CO2 forcing instead of a two-line fit, which is an improvement of course. But it’s still inferior to using the net radiative forcing, for which especially the aerosol component is very important.

    Dhogaza and myself both explained why it’s appropriate to use the period after 1975: That’s when the GHG forcing is dominant and the expected trend in temperature would be roughly linear (and linearity was an assumption in the simplest form of the ADF tests, as I came to understand). Plus, that’s what got this discussion started, as it is the period I used for a regression in the head post (last figure).

    You may be annoyed with certain people; others may be annoyed with you. As long as everyone retains some civility we can continue the discussion, but whether behaviour crosses a line or not is in the eye of the beholder. And on my blog, I decide where the line is.

  320. VS metrics Says:

    Bart,

    I responded to the post-1975 concern well before Tamino posted his results. I also performed a set of separate statistical tests to test this ‘hypothesis’, properly. Note that you three are in fact disputing a whole body of published literature here, while acting as if I’m the one making the ‘extraordinary’ claim.

    I understand Tamino and his sidekick are your virtual friends, and this is your blog.

    Therefore do as you like and moderate as you wish. The readers will judge you for how you’re doing that, and considering the thoughtfulness of many of the contributors here, I have no doubt they will draw the right conclusions.

    ————–

    For all those joining the discussion and can’t find my test results and arguments, [edit] here’s a quick overview of the current statistical discussion, especially relating to the posts made by [edit] Tamino.

    ** The whole discussion started on March 5th, with my comment here.

    My claim there, that you cannot simply calculate deterministic trends via OLS and report confidence intervals, rests on the I(1) property of temperature series (i.e. the series contains a unit root), which has been widely established in the literature. Tamino seems not to have read any of these papers, as he’s not disputing e.g. Kaufmann’s work explicitly, only implicitly.

    ** A whole debate then ensued, and at one point, Tamino made this post, misstating my position, and denying the presence of unit roots (i.e. stating the series is not I(1), but I(0)) in the GISS data, here.

    ** The definition of a unit root is given here:

    ** I then perform my unit root analysis, and find that in only two of the many different set ups, the unit root is actually rejected. I report all test set ups, motivations and test results here.

    ** I explain the exact nature of the cherry picking performed by Tamino here.

    Note that Tamino posted the only two test results which agree with his hypothesis, while ignoring the vast majority of indicators pointing to the presence of a unit root:

    ** Tamino then responded with this new post, where he picks 34 observations and starts performing statistically invalid analyses, here.

    ** I comment on the impact of his sample size/test procedure, here.

    ** I then propose (and perform) the appropriate unit root test (Zivot-Andrews unit root test), and test for the presence of a unit root, allowing for the hypothesized structural break in the series. This was proposed by some here, and is the ‘basis’ of Tamino’s post.

    This breakpoint is determined endogenously by the test/data, and doesn’t require us to throw away 3/4 of our observations and/or cherry-pick our breakpoint.

    This is in stark contrast to what Tamino does, namely: handpicks 30-something obs, ‘shows’ unit root analysis then ‘doesn’t work’ (good morning, Columbus) and then proceeds to perform a spurious regression (i.e. I(1) var on I(2) var, ignoring a total of 3 unit roots there) to ‘prove his point’.

    Note that, when properly tested via the Zivot-Andrews test, the presence of a unit root is again not rejected:

    ** Finally, I argue why the regression performed by Tamino in his latest post is invalid, referring to published literature (i.e. the I(1) vs I(2) properties of temperature and GHGs respectively).

    Those interested in a textbook treatment of where Tamino messed up are referred to Davidson and MacKinnon (2004), page 609, section “Regressors with a Unit Root”. Bart, you stated you are not convinced by my reply; Davidson and MacKinnon offer a formal proof of it, on page 610.

    My post related to this issue is given here.

    Here I’m reproducing the test results for unit root presence in the CO2 forcings, showing that the series are indeed I(2), or have two unit roots. References for other GHG’s are also listed.

    Also, for the record, here’s a plot of Tamino’s ‘cherry picked’ 35 observation sample. Note that he set his alternative hypothesis to be a straight linear trend. Very convenient if you want to reject the null hypothesis of a unit root (not that the tests are valid with so few observations, and so many parameters to estimate).

    ————–

    IMPORTANT:

    **I’m not ‘disproving’ AGWH here.
    **I’m not claiming that temperatures are a random walk.
    **I’m not ‘denying’ the laws of physics.

    *****These are all strawmen, posted by Tamino’s (admittedly statistically illiterate) ‘fan base’ here, in an effort to dilute my argument and make my contributions unreadable.

    All that I am doing is establishing the presence of a unit root in the instrumental record. The presence of a unit root renders regular OLS inference invalid. Put differently, you cannot simply calculate confidence intervals assuming a trend-stationary process, because the temperature series is shown to be non-stationary (i.e. contains a unit root).

    Alex gives the technical reason why OLS inference is invalid in the presence of a unit root. This concerns the finiteness/non-singularity of lim_{n->Inf} Qn, where Qn = (1/n)*X’X (‘consistency’, or ‘raakheid’ in Dutch, of the t- and F-based tests demands that Qn be finite and non-singular in the limit). In the case of unit root(s) somewhere in X, Qn diverges in the limit. This violates one of the assumptions of OLS-based testing.
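
    In symbols, a sketch of the standard argument: for a driftless random walk x_t = x_{t-1} + e_t with x_0 = 0 and Var(e_t) = sigma^2,

    \[ E[x_t^2] = t\,\sigma^2 \quad\Rightarrow\quad \frac{1}{n} \sum_{t=1}^{n} E[x_t^2] = \frac{(n+1)\,\sigma^2}{2} \to \infty, \]

    so Qn has no finite limit and the usual t- and F-asymptotics do not apply.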

    Here’s Alex’s post:

    These findings, i.e. the unit root in Temperature series, have also been reported numerous times in the published literature that I have surveyed:

    ** Woodward and Grey (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Hey et al (2002)
    – Confirm presence of unit root in temp series, I(1)
    ** Kaufmann et al. (2006)
    – Treat the temperature variable as I(1)

    Unpublished

    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    There are more.

    All more or less confirm my results (and are in contrast to Tamino’s). Note that all authors who check also confirm that all GHGs are I(2).

    ————–

    I will respond to comments regarding the statistical analysis here. We can discuss the physical implications once we have established the validity of my test results/set ups (which are disputed by Tamino). I’m not avoiding that discussion though, and am very interested to engage in it. However, I first need to make my statistical point clear.

    I have seen a couple of interesting posts, that I would also like to continue on once the statistical results are dealt with, like for example that of Rob Traficante yesterday.

    ————–

    VS

    PS. Just for the record: people, statistics is a formal discipline. If you have a ‘theory’ on how a certain estimator is going to behave in a certain situation (‘nonlinearity’ of the trend, or whatever else you think up, rather than test), come with either a formal derivation, Monte Carlo simulation results, or a reference.

    I guess the same holds for physics. I’ve seen a lot of ‘it contradicts the physics’ handwaving, but no proofs. I don’t understand how this doesn’t annoy the physicists reading this thread [edit]. Simply stating your opinion on how the physics is not in line with the statistics in half a paragraph is not sufficient to prove said point.

  321. Tim Curtin Says:

    Bart: do you have data for the aerosol components of net forcing at the actual locations where GISS temps are measured before being gridded and amalgamated? If not, how do you explain these latest regression results of mine for Pt Barrow in Alaska? The model uses absolute values, not 1st differences, but with the Wiki definition for RF.

    Mean temperatures July 1960-2006, Pt Barrow
    Model Summary
    Model AdjR2 SEE Durbin-W
    1 .962 .86175 2.029

    Coefficients t Stat P-value
    Constant=zero.
    RF [CO2] -0.903 -3.1539 0.0029
    Solar SR 0.00051 2.414 0.0199
    H2O 4.896 7.025 1.0476E-08

    All are significant at better than 95%, but the RF is negative! The “H2O” (precipitable water in cm. according to the NOAA data source) seems to be decisive. The DW indicates absence of spurious correlation. Not much strong sun at Barrow even in July with ave T of around 3oC.
    BTW, the semi-log growth rate of mean temperature in July at Barrow from 1960 to 2006 was 0.071339% p.a.. Projecting to 2100 at that rate we get from 3.86oC in 2006 to 4.127oC, a rise of 0.2666, or 0.027oC per decade. Is that enough to wipe out all polar bears?

    [Reply: global vs local. BV]

  322. jfr117 Says:

    I have to say that I find it frustrating when the argument that VS is in economics and not a statistician, physicist, etc. is used. What we all should be seeking is the truth, no matter who or what discipline it comes from. I’ll be honest here: part of what I find very frustrating is the real or perceived notion of ‘you are not one of us’, therefore you are wrong; à la ‘the Beenstock paper has not received much play, therefore it’s irrelevant’. Climate science is not a mature discipline and involves multiple disciplines. So in order to advance, and we should all acknowledge that we still need to advance our knowledge, these cliques and this groupthink must go away. The fact that Tamino doesn’t think VS is worth the time is fine, but it does not mean that Tamino is correct because, well, he says so. VS and Alex, again, thank you for pushing us all.

    [Reply: I understand your take on this situation, but from the climate science point of view the whole scientific foundation gets attacked multiple times daily, as if decades of science is suddenly proved wrong by a guy on a blog. It’s very unlikely, and because many of these claims are unfounded, scientists (and their supporters) tend to have gotten a little defensive. A claim that ‘AGW has been proven false’ is bound to get peoples defenses up. It’s almost as unlikely as claiming that smoking cannot cause cancer or that gravity doesn’t exist. Ok, not quite, but you get the point. BV]

  323. Marco Says:

    Tim, I don’t know where you get your data, but in my analysis of Barrow temperature trends I get a T-increase of 6 degrees Celsius per century using the annual temperatures (P<0.0001 against a slope of zero).

    (July, notably, is 4 degrees per century)

  324. Craig Goodrich Says:

    dhogaza writes:

    “CO2 absorbs LW IR, we know that, there’s tons of observational data backing up AGW theory, etc etc.

    “B&R are essentially demanding that a large percentage of known physics be thrown in the toilet. Not only is AGW not real, but there’s a very good likelihood that they’ve just proven that airplanes don’t fly …”

    No, actually, there is no observational data anywhere backing up AGW theory. There is SOME observational data suggesting the global average temperature has increased on the order of 1 deg C over the last 150 years, but the exact figure is highly uncertain due to instrumental error and the adjustment games CRU, GISS and the rest have been playing.

    As to actual evidence for CO2-driven AGW, twenty years ago the only argument for it was, “our models [all based on astronomer Hansen’s studies of Venus] can’t reproduce recent warming without a strong CO2 greenhouse effect.” Now, reading the only relevant chapter of the IPCC’s AR4, WG1 Ch 9, “Attribution”, after two decades and a hundred billion dollars, we find the only argument for CO2-driven AGW remains, “our models [all still based on astronomer Hansen’s studies of Venus] can’t reproduce recent warming without a strong CO2 greenhouse effect.”

    As to known physics, there is no question as to the behavior of the CO2 molecule subjected to radiation in the lab. There is substantial question as to the behavior of a minute amount of a trace gas in an atmospheric / hydrological system the basic effect of which is to move immense amounts of heat and moisture from point A to point B. For example, a 10% reduction in humidity in the upper troposphere is enough to offset the entire greenhouse effect of all the atmospheric CO2 above 160 ppm, which is the minimum level needed to sustain life.

    The whole theory is as ludicrous as the assertion that anthropogenic CO2 will measurably modify the pH of the oceans — given that the oceans already contain nearly two orders of magnitude more dissolved CO2 than the entire atmosphere.

    [Reply: Leave unfounded accusations at the door before entering please. Adjustments to raw data are needed and documented and they do not influence the warming trend. There’s a lot more known about climate change than you’re aware of apparently. E.g.
    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). This heat build-up is manifesting itself across the globe.
    And since you like to bash models: why don’t you develop a physics-based model that can explain both the current and past climate changes at least as well as (preferably better than) current GCMs, but without a substantial sensitivity to adding extra greenhouse gases? Then I’ll take you seriously. BV
    ]

  325. Marco Says:

    @jfr117:
    the issue isn’t so much that VS (or B&R) are not physicists, it is that they do an analysis, make large claims (CO2 hardly causes warming and it is transient), but fail to test that against the known physics.

    If you do loads of math that says bumblebees can’t fly (*), are you going to doubt your math, or are you going to doubt the observations?

    * this can be done: just take the math for fixed-wing aircraft and a bumblebee should not even lift off. Unfortunately, it is inappropriate math for the situation.

  326. MP Says:

    I would like to draw attention to several recent papers by Lean et al listed below, in which global temperature time series were analysed using multi-regression techniques. These studies strongly suggest that the time course and fluctuations of the temperature can be largely explained by a combination of ENSO, volcanic aerosols, solar radiation and, last but not least, anthropogenic forcing, which is composed of eight different components, including greenhouse gases, land-use and snow albedo changes, and tropospheric aerosols etc.

    How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006

    How will Earth’s surface temperature change in future decades?

    Cycles and trends in solar irradiance and climate

    There is also a paper by Mike Lockwood from 2008 with a similar analysis, however he uses an artificial linear trend in anthropogenic forcing and concludes that the sun’s contribution is quite small (compared to the findings by Lean et al).

    Recent changes in solar outputs and the global mean surface temperature. III. Analysis of contributions to global mean air surface
    temperature rise

    Another even older paper from 2002 by Douglass and Clader performs a multi-regression analysis on the UAH T2LT satellite data.

    Climate sensitivity of the earth to solar irradiance

    In 2004 they submitted an update of the analysis.

    Climate sensitivity of the earth to solar irradiance:update

    One of the major issues with the Douglass papers is the use of the UAH dataset, which did not show any warming at the time. After the discovery and subsequent correction of an error in 2005, the UAH t2lt showed warming similar to the other datasets.

    The use of multi-regression techniques has been criticized by several climate scientists, who argue that these techniques, if not applied carefully, may lead to non-robust, spurious results. One of the reasons is that the different covariates show collinearity. Another issue is the way response lags for the different covariates are introduced in the regression model (using no lag, a discrete shift, or an RC-filter type of lag). And finally there is still a debate going on concerning TSI reconstructions, which results in the use of different TSI reconstructions in the different papers; different reconstructions may (and do) yield different results.
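
    To make the collinearity worry concrete, here is a minimal sketch in Python (synthetic data; the sample length, amplitudes and variable names are invented for the example and are not taken from any of the papers above):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 130  # roughly a century of annual data

        # Two covariates that overlap strongly, e.g. a trending solar proxy
        # and a trending anthropogenic-forcing proxy.
        x1 = np.linspace(0.0, 1.0, n) + 0.05 * rng.standard_normal(n)
        x2 = 0.9 * x1 + 0.05 * rng.standard_normal(n)  # nearly collinear with x1
        y = 1.0 * x1 + 0.5 * x2 + 0.1 * rng.standard_normal(n)

        fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
        print(fit.params)  # individual coefficient estimates
        print(fit.bse)     # their standard errors, inflated by the collinearity

    With covariates this entangled, the attribution to the individual regressors (sun versus greenhouse gases, say) becomes very sensitive to noise and to the choice of TSI reconstruction, even when the combined fit looks fine.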

    A more conservative analysis was conducted by David Thompson in the following two papers:

    A large discontinuity in the mid-twentieth century in observed global-mean surface temperature

    Identifying signatures of natural climate variability in time series of global-mean surface temperature: Methodology and Insights

    In these papers Thompson removed the effects of ENSO, large volcanic eruptions and dynamically induced variability, the latter on the basis of the cold-ocean–warm-land (COWL) pattern, from various temperature series. The regressions of ENSO and volcanic eruptions were performed very carefully, by only considering stretches of data which showed low collinearity when removing one or the other covariate. Interestingly, he found that part of the variability observed in the global temperature during the mid-twentieth century is likely to be caused by instrumental biases (another source of variability). In the second paper he finds, after removing Tdyn (COWL), ENSO and volcanic eruptions, a clear monotonic global warming pattern since ~1950 (some of the data can be found here). It might be interesting to see what ADF tests tell us about these residual temperature series. Thompson did not attempt to further remove the effect of varying solar activity or anthropogenic forcing.

    It is clear that the variability in global temperature can be largely explained by a combination of different, reasonably well characterized and measured sources. Understanding the statistical properties of the global temperature time series would therefore also require a better understanding of the statistical properties of the underlying sources of variability. Furthermore, models that do not include the known sources of variability are likely not robust.

  327. MP Says:

    my comment disappeared into the spam filter… :P

  328. KenM Says:

    The climate forcings are calculated based on physics; they are not derived from the temperature trend.

    Yes, but the *measurements* that go into those calculations are, in at least one case (aerosols), guesstimates. I have no doubts that the physics is sound; it’s the “measurements” of something for which we do not actually have direct data that bothers me. And those aerosols have been used to explain a lot of discrepancy in the temperature record. A discrepancy that can also be explained by (possibly) a random walk.

    [Reply: Yes, there’s a lot of uncertainty in the (historic) aerosol forcing, but don’t confuse uncertainty with knowing nothing at all. And no, that doesn’t make a random walk remotely more likely, because for the earth system as a whole it would violate conservation of energy. BV]

  329. Bart Says:

    If your comments gets flagged (for having lots of links for example), please don’t resubmit the same or a similar comment, but rather write a very short comment to that effect or send me an email (via the link on the side). Makes my life easier to “de-spam” your comment. Thanks.

  330. Bart Says:

    VS,

    In your first comment here you wrote:

    “In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk. Simply calculating OLS trends and claiming that there is a ‘clear increase’ is non-sense (non-science). According to what we observe therefore, temperatures might either increase or decrease in the following year (so no ‘trend’).”

    Whether the ‘naked values’ in the absence of any physical meaning or context could theoretically be consistent with a random walk is a purely academic mathematics question, on which I haven’t opined very strongly. Though it seems that you backpedaled from ‘random walk’ to ‘contains a unit root’, which Alex helpfully explained in a comment is not necessarily the same.

    The physics of it all tells me that it hasn’t in fact been random/purely stochastic, since that would be inconsistent with other observations and our physical understanding of the climate system (incl. conservation of energy).

    Basically, a random walk towards warmer air temps would cause either a negative radiative imbalance at TOA, or the energy would have to come from other segments of the earth’s system (eg ocean, cryosphere). Neither is the case. It’s actually opposite: a positive radiation imbalance and other reservoirs also gaining more energy. Which makes sense, in the face of a radiative forcing.

    The statistics details go over my head at times, but on physical grounds it seems clear that the increase in global avg temp over the past 130 years has not been random, but to a certain extent deterministic. (see also my newer post) It’s a consequence of the basic energy balance that the earth as a whole has to obey. Would you agree with that? If not, you would in fact be making an extraordinary claim that needs extraordinary evidence.

    Finally, different people have different sensitivities. I’m sure you would be quite defensive if I came into an econometrics forum and claimed that the whole foundation of your discipline is wrong. That is pretty much what the B&R paper claims, and your entry on this blog (see the selected quote above) raised suspicions that that was your line of thinking as well. I also note that you have accused/badmouthed a fair number of people a fair number of times, and called others out on their anonymity. There’s the pot and the kettle, you know.

  331. Scott Mandia Says:

    Craig,

    I suggest you go to the Start Here link on Real Climate and educate yourself. I also suggest checking out Skeptical Science.

    I hope you realize that if what you say is true then 100s maybe 1000s of scientific experts are wrong and very ignorant. What do you think the probability of that is?

  332. dhogaza Says:

    VS would be a lot easier to deal with if he were at least consistent…

    IMPORTANT:

    **I’m not claiming that temperatures are a random walk.

    From VS’s first post:

    In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk

    It’s easy to see how people might believe that VS is claiming that temps are a random walk …

  333. Scott Mandia Says:

    jfr117:

    I know that you are concerned that we do not understand the internal variability enough to be so confident that AGW is significant and action is required now.

    What is your biggest concern regarding action? Is it a financial concern?

  334. whbabcock Says:

    The issues being addressed in this thread relate to a single question, “Does available real world data support the hypothesis that increased concentrations of atmospheric greenhouse gases increase global temperature permanently?”

    VS has clearly pointed out that, to properly test this hypothesis, one must use statistical techniques that are consistent with the underlying characteristics of the data. As noted in the B&R paper, “… the radiative forcings of greenhouse gases (CO2, CH4 and N2O) are stationary in second differences (i.e. I(2)) while global temperature and solar irradiance are stationary in first differences (i.e. I(1)).” B&R refer to five papers that have the same findings – i.e., that radiative forcings and global temperature are non-stationary to the same order.

    Ignoring the properties of the time series data used to test a theory (hypothesis) easily leads to the “pitfall of spurious regression.” That is, you can’t look at the simple correlation between greenhouse gas concentrations and temperature (or simple transformations of these data) and accept the hypothesis that one is caused by the other. In the case before us (i.e., given the characteristics of the time series data being used), cointegration has been demonstrated to be the appropriate statistical technique. This has nothing to do with the logic or correctness of the underlying theory being tested. Rather, it has to do with the statistical properties of the time series being used to test the theory – two separate issues.

    The B&R paper finds that, when cointegration is applied to available data, “… greenhouse gas forcings do not polynomially cointegrate with global temperature and solar irradiance.” Hence, available data do not support the physics-based hypothesis.

    This type of statistical result simply demonstrates the relationship (or lack thereof) in available data. It is what is!! This result stands (unless there are problems in execution – e.g., the analysis was implemented incorrectly, or the data are faulty, etc.). No appeal to theory or to alternative analyses of different types of data that support the hypothesis changes this single analytical result. Again, it is what is! It is what the data are telling us. In this case the data are telling us that bumble bees can fly (i.e., real world data – observations – are inconsistent with the formulated, mathematically based hypothesis).

    What does all this mean? It could mean that the theory is incorrect. Or, it could mean that the data are not “accurate” enough to exhibit the “theoretical relationship.” It certainly “raises a red flag” as VS has noted several times. And, it does mean that one can’t simply point to highly correlated time series data showing rising CO2 concentrations and rising temperatures and claim the data support the theory.
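
    For readers not versed in the jargon: a series is called I(d) when it has to be differenced d times to become stationary. A minimal Python sketch of how such orders are checked in practice (on synthetic data; this is emphatically not B&R’s actual code or data):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        walk = np.cumsum(rng.standard_normal(500))  # a pure random walk, I(1) by construction

        for name, series in [("levels", walk), ("first differences", np.diff(walk))]:
            # H0 of the ADF test: the series contains a unit root.
            print(name, "ADF p-value:", round(adfuller(series)[1], 3))
        # Typical pattern: a large p for the levels (unit root not rejected)
        # and a tiny p for the differences; that pattern is labelled I(1).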

  335. jfr117 Says:

    @ bart and marco: again, vs is not attacking the physics. he is objectively statistically testing the global temperature anomaly data. i am sorry that it offends you when people ‘attack’ the theory you like, but it should be ok. if your theory is correct – it will withstand all objective attacks. if the theory is falsified, it can be reworked to be made stronger. science is never settled.

    bart, if you want your blog to be a place for actual scientific discourse, then consider yourself lucky. if you just want cheerleaders to expound on how much we know, then please state that. i think vs has raised the game and is a valuable asset to the climate discussion.

  336. Scott Mandia Says:

    BTW, I just read a post by Marco on OM that the B&R paper is NOT published. According to B’s vitae, it is a working paper.

    I am curious to see if and when it gets published what the reaction will be.

    That does not change what VS is speaking of; it just means that the B&R paper should not yet be given much weight by anybody.

  337. jfr117 Says:

    @ scott – what is my biggest concern? i am concerned that ‘action’ is based on a premature understanding of the climate system, and as a result, confidence in science (e.g., meteorology) will be seriously undermined in the future. i am concerned that if co2 is not the primary cause of the recent 40 yr warm period, then we will have wasted vast resources and public trust in building this case. i am concerned that a 40 yr period is insufficient to fully characterize a) the magnitude of the anomaly wrt historical temps and b) our full understanding of climate dynamics.

    i advocate more study (open to both ‘groups’) to build a true consensus of climate science. i think then we can build a more robust policy that will withstand a 12 year plateau in temps or an increase in hurricanes; e.g., politicians will not have to yell.

  338. S. Geiger Says:

    “is the case. It’s actually opposite: a positive radiation imbalance and other reservoirs also gaining more energy. Which makes sense, in the face of a radiative forcing.”

    – along these lines, can we explain the current ~10 yr blip in which temps have been relatively stable? What forcings have changed to account for this local plateau? Is there some amount of uncertainty about the climate forcings, or is this viewed more as a shortcoming in the available data? Or am I completely on the wrong track, and the earth has been relatively constant in gaining more energy but it’s not manifest in the atmospheric temp readings?… although my (limited) understanding is that ocean heat content has also been on somewhat of a plateau.

  339. Scott A. Mandia Says:

    Thanks, jfr117. It is always good to know the motivation behind one’s comments.

    You know my position is that I am convinced that waiting will have more dire consequences (including financial) than action now.

    Public confidence in science will be far worse if predictions come true and nothing was done about it.

  340. Scott A. Mandia Says:

    What plateau?

    Each of the last three decades has been warmer than the one before and each has set a record. The 2000s were the warmest despite the 2nd half of that decade experiencing a record low solar intensity.

    Ocean heat content is also increasing over time.

  341. MP Says:

    @whbabcock,

    If I use several covariates with mixed roots (e.g. I(0), I(1) and I(2)) to create a new time series, what would be the root of this new time series?

    Does this not depend on the relative amplitude of the first and second order differences in the underlying covariates?

    And why is it, from a statistical point of view, “correct” to ignore known sources of variability in global T (ENSO, volcanic eruptions, instrumental biases etc.), which can be expected to affect the first and second order differences? For example, ENSO (a stochastic cyclic phenomenon) comprises a significant portion of the variance in the temperature data.

    Why not remove the known sources of natural variability first and then check the order of the time series?
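
    On the first two questions: in theory the sum inherits the highest order of integration among its components (an I(1) plus an I(0) series is still I(1)), but in a finite sample the relative amplitudes matter a great deal for what the tests actually report. A rough Python illustration (all amplitudes invented for the example):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(2)
        n = 130
        walk = np.cumsum(0.05 * rng.standard_normal(n))  # small I(1) component
        cycle = (0.2 * np.sin(2 * np.pi * np.arange(n) / 4.5)
                 + 0.1 * rng.standard_normal(n))          # large stationary "ENSO-like" part

        for name, series in [("I(1) part alone", walk), ("I(1) + stationary part", walk + cycle)]:
            print(name, "ADF p-value:", round(adfuller(series)[1], 3))
        # Asymptotically the sum is still I(1), but with the stationary part
        # dominating, a unit root test on 130 points can easily tell a
        # different story than it does for the underlying walk.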

    Also check the papers I linked above, in my comment of March 17, 2010 at 14:40.

  342. jfr117 Says:

    …the plateau of the past 12 years. it’s used as ammunition, as proof that the science is not settled. i think that if the scientific process had been open over the past decade…and not hijacked by politics…then every storm, heatwave, coldwave, tornado or hurricane wouldn’t be used as proof or disproof. and we wouldn’t have to undergo the rhetoric from both sides.

  343. Marco Says:

    @jfr117:
    An “objective” analysis of the climate data is almost impossible, as you actually need to understand the data to apply the appropriate equations.

    Moreover, we need to take into account what types of errors and uncertainty may be present in the math. MP has made some very relevant comments about ‘issues’ with the data.

    Take all this together, add the fact that the analysis yields some rather surprising results (same forcing = different heating, one supposedly permanent and the other transient), and it is extremely arrogant to come in and claim your analysis shows the established physics to be wrong (and that *is* what B&R do). I’d do some major testing to check whether my results are robust, talk to climate scientists to see if I did everything right, and at the very least point out that my statistical methods are known to have some uncertainty. As Tamino has pointed out, you can use different tests and some simply give a different answer. Loads of arguments can ensue about the appropriateness of the various tests, which already indicates that opinion enters into the matter.

  344. dhogaza Says:

    BTW, I just read a post by Marco on OM that the B&R paper is NOT published. According to B’s vitae, it is a working paper.

    I am curious to see if and when it gets published what the reaction will be.

    That does not change what VS is speaking of; it just means that the B&R paper should not yet be given much weight by anybody.

    Any paper claiming that a 1 w/m^2 forcing from different sources results in a different climate response won’t make it into any reasonable journal in the physical sciences.

    If it gets in anywhere, I imagine it will be some economics journal.

  345. MP Says:

    “Plateaus” also occurred in the 70’s, 80’s and the 90’s.

    This can be easily visualized when plotting the 10 year trends of the various datasets:
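
    A rough sketch of how such rolling 10-year trends can be computed (with made-up data standing in for the real datasets):

        import numpy as np

        def rolling_trends(years, anoms, window=10):
            """OLS slope over each `window`-year stretch, in deg per decade."""
            slopes = []
            for i in range(len(years) - window + 1):
                x, y = years[i:i + window], anoms[i:i + window]
                slopes.append(10 * np.polyfit(x, y, 1)[0])  # deg/yr -> deg/decade
            return np.array(slopes)

        # Example with synthetic anomalies built around a 0.17 deg/decade trend:
        years = np.arange(1970, 2010)
        anoms = 0.017 * (years - 1970) + 0.1 * np.random.default_rng(6).standard_normal(years.size)
        print(rolling_trends(years, anoms).round(2))
        # The individual 10-year slopes scatter widely (some near zero or even
        # negative) around the long-term trend used to generate the data.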

  346. Craig Goodrich Says:

    BV:

    “Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). This heat build-up is manifesting itself across the globe.”

    No, they find increasing outgoing longwave radiation, which is consistent with slight temperature increases, which we already knew. And no heat build-up is manifesting itself anywhere, not in ocean heat content nor in average tropospheric temperature. (Murphy 2009, replete with the casual sprinkling of magic aerosols, is pure armwaving. Note that his coverage stops right at the point where we start to actually have good data.) Heat can not hide, not from the ARGO buoys and satellite coverage. It ain’t there.

    In fact, not only is there NO evidence for CO2-driven AGW, but every one of the theory’s predictions has proven not merely wrong, but spectacularly so. In any other branch of science, the theory would have been discarded more than a decade ago, but it’s been kept alive, zombielike, by billions of politically-motivated dollars and Climategate-style manipulation.

    “And since you like to bash models: Why don’t you develop a physics based model that can explain both the current and past climate changes at least as good (preferably better) as current GCM’s, but without a substantial sensitivity to adding extra greenhouse gases. Then I’ll take you seriously.”

    1) ANYTHING can reproduce any curve if you throw enough fudge factors into it. When the models all a) use exactly the same values for exactly the same parameters, and b) are fully available in source form on the Internet, I may take them seriously. To say a model that has a dozen arbitrary values tossed into it for the sake of curve fitting is “physics based” is, to put it mildly, using the term loosely.

    2) A very simple model incorporating the PDO plus a Svensmark-effect warming of around .5 deg C/century due to increasing solar activity throughout the 20th century reproduces the (supposed) surface values quite nicely without further ad-hoc aerosol jiggery-pokery and quite without any CO2 nonsense.
    ===================

    Scott: “I suggest you go to the Start Here link on Real Climate and educate yourself. I also suggest checking out Skeptical Science.”

    I have read ALL of the basic RC posts. The fundamental purpose of RC is to put out responses — however vacuous — to any accidental leaks of real science into the climate propaganda stream. I have yet to see any post there that does not consist of some combination of strawman, changing the subject, or occasionally simple obfuscation.

    “I hope you realize that if what you say is true then 100s maybe 1000s of scientific experts are wrong and very ignorant. What do you think the probability of that is?”

    Actually the relevant group — again, WG 1 Ch 9 — is much less than 50; probably closer to 20. All the rest are irrelevant; they may be right or wrong, but they are definitely hungry.

  347. dhogaza Says:

    Steve Geiger:

    along these lines can we explain the current ~ 10 yr blip in temps being relatively stable? What forcings have changed to account for this local plateau?

    Well, we have been in an extended solar minimum – that’s no secret, since many in the denialsphere have been jumping up and down in excitement awaiting the 2nd coming of the Little Ice Age. Hasn’t happened, of course: 2000-2009 was the warmest decade on record despite the solar minimum. But certainly lower TSI has been a slightly negative forcing.

    Beyond that you have natural variability, La Niña, the lack of a strong El Niño in the 2000-2009 time frame, etc.

  348. dhogaza Says:

    In this case the data are telling us that bumble bees can fly (i.e., real world data – observations – are inconsistent with the formulated, mathematically based hypothesis).

    What does all this mean? It could mean that the theory is incorrect. Or, it could mean that the data are not “accurate” enough to exhibit the “theoretical relationship.” It certainly “raises a red flag” as VS has noted several times.

    Well, when a paper like B&R executes a statistical analysis that supposedly throws out a huge amount of physics that’s unrelated to climate science (though the fall of climate science is one result they trumpet), you’re right.

    Red flags are raised.

    Either just about everything we know about energy is wrong – not just in climate, but everywhere, inside the cylinders of your SUV, energy that heats your house, energy released by earthquakes, you name it – or B&R screwed up somewhere.

    I’ll sell you a certified and calibrated Occam’s Razor (TM) if you need help figuring out which is likely to be true. Either they’re right and most of physics is wrong, or … they screwed up.

  349. dhogaza Says:

    Or, it could mean that the data are not “accurate” enough to exhibit the “theoretical relationship.”

    To be clear, in case you’ve not read B&R: they reject the notion that there’s not enough, or not accurate enough, data. They state absolutely that “AGW is disproved”.

    And that they’ve disproved radiative physics, as well …

  350. Geckko Says:

    VS is correct.

  351. A C Osborn Says:

    I have some questions for Bart, dhogaza and Scott.
    I am not a Scientist or Mathematician, but I am interested in learning the truth.
    So based on this CO2 statement, which has been accepted on here as a Fact, “The concentrations today are the highest in the past 650,000 years and likely to be higher than at any time in the past 15 million years.”

    Question 1, why are we using the Mauna Loa atmospheric level of CO2 from 1958 onwards, why not use whatever we were using to show the values before 1958?

    Question 2, if we were using ice core data prior to 1958, why?

    Question 3, why aren’t we using all the other Valid Scientific Measurements of CO2 prior to 1958?

    Question 4, why do we ignore Valid Scientific Measurements of CO2 for the 1940s which show around 375/380 ppm?

  352. Shub Niggurath Says:

    dhogaza
    “In this case, 1995-present gives p = 0.076, at least according to one person who computed it. Or a 92.4% confidence level, as it is often put.”

    The null hypothesis in this case is: “the rise in temperatures (GTA) is higher in 1995–present compared to ‘previous periods’ or the ’60s–’90s base period”. If the p value for this turns out to be 0.076, the null hypothesis is rejected. What more is there to be said?

    The null hypothesis can of course be accepted at higher levels of uncertainty, but that is allowed only if levels of confidence are agreed upon *prior* to experimentation. The ‘experiment’ in this case being the calculation of the temperature anomaly. Moreover, the gridded temperature anomaly calculations are derived from automated weather stations in thousands of places, making the sample size in question enormous. How then are we satisfied with a significance level of p<0.05? Shouldn’t the levels be set much lower? I know the data elements are anomaly calculations per year, and there are only 15 years on one side in this example we are talking about (the Jones 1995–present warming one), but isn’t each anomaly value representative of a large sample?

    Therefore I find the ‘confidence intervals’ understanding in the IPCC – which seems to run along the lines of Jones’ statement and dhogaza’s explanation of it – very puzzling.

    I've raised this issue and also seen it discussed several times – and I've always seen someone throwing a link to the uncertainty estimates the IPCC uses. How is that good enough? That link only illustrates the uncertainty reporting scale the IPCC decided to use to convey results. What is the statistical foundation of such thinking in the first place? I’ve gone through this thread and I see that VS has also raised this question. I now figure – if a professional statistician cannot understand it, there must be some merit asking this question.

    Thanks

  353. Rattus Norvegicus Says:

    Bees can’t fly debunked.

    More apropos to the situation here is the application of the wrong theory to the problem at hand.

  354. Ron Broberg Says:

    VS: By allowing for a structural break in both intercept and trend separately, or both together, the null hypothesis of unit root presence is not rejected, in any instance.

    Conclusion: unit root GISS_all

    Am I misinterpreting something or have you done this twice now?

    “unit root not rejected” IS NOT the same as “presence of unit root confirmed.”

    You have done this here and here

    VS, I appreciate your attempt at reporting complete results – but I have not been able to follow some of your jumps, where you report that the level data analysis gives “unit root not rejected” and you then summarize with the conclusion “unit root confirmed.”
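
    The distinction matters because unit root tests have notoriously low power against near alternatives in samples of this length. A small Python sketch (synthetic and illustrative only): a series that is trend-stationary by construction can still fail to reject.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(3)
        n = 130  # about the length of the instrumental record, in years

        # Deterministic trend plus strongly autocorrelated, but stationary, AR(1) noise.
        noise = np.zeros(n)
        for t in range(1, n):
            noise[t] = 0.8 * noise[t - 1] + 0.1 * rng.standard_normal()
        series = 0.005 * np.arange(n) + noise

        # ADF with constant and trend; H0 = unit root present.
        print("ADF p-value:", round(adfuller(series, regression="ct")[1], 3))
        # No unit root was put in, yet p frequently lands above 0.05:
        # "not rejected" is a weaker statement than "confirmed".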

  355. luc Says:

    Apparently when the math does not work, we imply disrespect for the physics. However, I invite you to see this simple explanation by Freeman Dyson, who makes a clear case for spending money on data input instead of theoretical models.

  356. dhogaza Says:

    More apropos to the situation here is the application of the wrong theory to the problem at hand.

    That was my point in bringing up the bumble bee can’t fly myth …

    That the original calculation was for a fixed-wing aircraft, and that this calculation was mis-applied by an entomologist who apparently didn’t understand the model he claimed was meant to prove a bumble bee can’t fly.

    Somehow, somewhere, B&R are misapplying their tool(s).

  357. dhogaza Says:

    Question 1, why are we using the Mauna Loa atmospheric level of CO2 from 1958 onwards, why not use whatever we were using to show the values before 1958?

    Precision. Frequent sampling. Free from urban sources of CO2.

    Question 2, If we were using Ice Core data prior to 1958 why?

    Presumably because that’s what was available, if true. Do you have a source stating that this is the only source that’s used?

    Question 3, why aren’t we using all the other Valid Scientific Measurements of CO2 prior to 1958?

    Which are they? Again, please source your statements.

    Question 4, why do we ignore Valid Scientific Measurements of CO2 for the 1940s which show around 375/380 ppm?

    Urban areas have considerably elevated CO2 because it takes a while for CO2 from cars, trucks, industrial plants, etc. to disperse in the atmosphere, and as it is doing so, more CO2 is being emitted.

    Most of the supposed sources for claims that CO2 was higher in the past have been contaminated in this way, or by being measured indoors, etc.

    Which actually makes these *invalid* measurements if you’re interested in the CO2 content of the atmosphere at large. Perfectly valid if you want to know the CO2 content of the air next to a freeway, etc.

    So, which valid measurements do you think are being ignored?

    And who do you think is in charge of the conspiracy to ignore that data?

  358. dhogaza Says:

    The null hypothesis in this case is: “the rise in temperatures (GTA) is higher in 1995–present compared to ‘previous periods’ or the ’60s–’90s base period”. If the p value for this turns out to be 0.076, the null hypothesis is rejected. What more is there to be said?

    If you want to say that the difference between (say) p = 0.076 and (say) p = 1.0 is less significant than the difference between (say) p = 0.076 and p = 0.05, go for it.

    I’ve posted a link on the history of the p=0.05 choice for significance. It’s a *rule of thumb*. It’s not something that falls out of theoretical statistics. There’s no “theorem of significance” that proves that this is the “right” choice.

    Are you one of those who claim that the fact that 1995-present isn’t statistically significant to p <= 0.05 while 1994-present is, allows one to say "CRU head says 'there's no warming'"?

    You know that's not what the test says, right? It says it's actually much, much more likely than not that it's been warming 1995-present, just not *quite* strong enough to meet the iconic yet ad hoc 95% level.
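
    For concreteness, the disputed number is just the p-value on the slope of an ordinary least-squares trend line. A minimal Python sketch of where such a p-value comes from (the anomaly values below are made up for illustration, not the actual HadCRU series, and a serious analysis would also correct for autocorrelation):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        years = np.arange(1995, 2010)
        anoms = 0.01 * (years - 1995) + 0.1 * rng.standard_normal(years.size)  # fake data

        fit = sm.OLS(anoms, sm.add_constant(years)).fit()
        print("trend:", round(fit.params[1], 4), "deg/yr,",
              "p-value:", round(fit.pvalues[1], 3))
        # A p-value of 0.076 would mean: if there were truly no trend, noise
        # alone would produce a slope at least this large about 7.6% of the time.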

  359. dhogaza Says:

    The null hypothesis in this case is: “the rise in temperatures (GTA) is higher in 1995–present compared to ‘previous periods’ or the ’60s–’90s base period”.

    Lord. No, that’s not the null hypothesis. Not even close. Think about it.

  360. Bart Says:

    Whbabcock,

    No, that’s not what this thread is about. It’s about a few things: whether the temperature data contain a unit root, and what the consequences would be for how to analyze the time series.
    You would be correct with your inference if AGW were only based on (perhaps spurious?) correlation, but it’s not. It’s based on physics and a myriad of observations.

    Jfr117,

    I have no problem with people rooting for a unit root. But far-reaching claims that the 130 year trend is purely random/stochastic and not deterministic are at serious odds with established physics. I am disputing the perceived implications of the unit root/randomness hypothesis for our understanding of climate science. I think the implications are very limited, though I’m open to learning about more accurate ways to analyze time series.
    In what way is the existing *scientific* consensus not true enough for you?

    S. Geiger, Craig Goodrich

    As I wrote earlier:
    “I would expect that VS would agree that if the 130 year record is merely a random walk, that the latest 12 years are by far not enough to draw any conclusions from. Perhaps VS will join us in fighting strongly against the erroneous “1998″ claim.”
    He responded in the affirmative. VS, wanna join me?!

    Craig,

    I said *physics based* model. Not curve fitting.

  361. Shub Niggurath Says:

    dhogoza

    I am well aware of what a ‘p value’ signifies. I am only asking this:

    “If the p value is higher than 0.05, the probability of temperature trends being what they are purely due to chance is as high as there being no trend at all.”

    Is someone justified, purely mathematically/statistically, in making this statement? I do not wish to argue about what anyone in the media made of what Jones said in his interview.

    Of course, the caveats with all this are:
    1) This is a climatologically short period of time.
    2) The trend is still a rising one.
    3) Jones dropped the ball.

    You haven’t addressed my point that for larger samples of data, researchers usually seek higher significance levels before accepting causality/correlation.

    There are very good recent examples for this type of thinking – which similarly puzzles me. Look at the reaction to the Thailand HIV vaccine trial for example – some groups wouldn’t accept the study conclusions because p=0.039 wasn’t good enough (!).

    http://news.sciencemag.org/scienceinsider/2009/09/massive-aids-va.html

  362. Bart Says:

    Lucia makes the following comment about the random walk issue:

    There are good physical reasons to expect that global surface temperature is not a random walk. The first law of thermodynamics must apply. A warmer planet will re-radiate more heat to the universe. We might not know the precise constitutive relation, but if the planet warms, it’s unlikely to re-radiate less. (If it did, that would really be amazing!)
    So, here really is a physical law that will tend to cause the earth’s surface to hover around some typical value. If the temperature fell to that of Pluto, it would certainly warm up. If it rose to that of Mercury, it would certainly cool down.
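
    That restoring tendency is easy to demonstrate with a zero-dimensional energy balance model, C dT/dt = F − λT: a warmer surface radiates more, so the temperature relaxes toward F/λ instead of wandering off like a random walk. A rough Python sketch (the parameter values are ballpark figures chosen for illustration):

        # Zero-dimensional energy balance: C dT/dt = F - lam*T
        C = 8.0     # effective heat capacity, W yr m^-2 K^-1 (ballpark mixed-layer value)
        lam = 1.25  # net feedback parameter, W m^-2 K^-1 (illustrative)
        F = 3.7     # step forcing, W m^-2 (roughly a CO2 doubling)
        dt = 0.1    # time step, years

        T = 0.0
        for _ in range(int(100 / dt)):  # integrate for 100 years
            T += dt * (F - lam * T) / C
        print("T after 100 yr:", round(T, 2), "K; equilibrium F/lam =", round(F / lam, 2), "K")
        # The -lam*T term is exactly what a pure random walk lacks: any
        # excursion is pulled back toward equilibrium, consistent with the
        # first law plus increased radiation to space from a warmer planet.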

  363. Craig Goodrich Says:

    Bart,
    “Craig,
    I said *physics based* model. Not curve fitting.”

    Yup. And I said that to call the models currently in use “physics based”, when they are loaded with arbitrary parameters – which, whatever names they are given, still have no measurable physical basis in the data – is an amazing stretch. The “calibration” of these models is not an exercise in physics, it’s an exercise in curve fitting; for all the correspondence to actual measured real-world data you could call their parameters “pinkbunny1, pinkbunny2, pinkbunny3, …”

    If you actually believe that we know enough about the forces and energies involved in the chaotic, hugely complex climate system to actually construct a real physics-based model, I’m afraid you simply don’t understand the science (as the RC fanboys love to say).

    [Reply: Read up on what models actually do (e.g.) before making sweeping statements. BV]

  364. MikeN Says:

    >Add to those two the negative coefficient their model assigns to the first difference of methane forcing, which is patent nonsense… But that should have been a huge red flag;

    The same thing happens in Steig’s Antarctic warming paper. When unrolled, the calculation of temperature applies a negative weight to some stations’ temperature records.

    VS, do you have a personal stats blog of some sort?
    I’ve suspected Tamino engages in some cherry-picking, but he never answered enough questions for me to follow up.
    He did try to quote Ian Joliffe as an authority before, and ended up having Ian tell him he is wrong.

  365. dhogaza Says:

    “If the p value is higher than 0.05, the probability of temperature trends being what they are purely due to chance is as high as there being no trend at all.”

    Short answer, no. But scientists raise the bar far higher than that. Failure to reach the p=0.05 level of significance means just that and no more. p=0.076 means that, and it’s not the same as p=0.5, which seems to be what your statement says.

    You haven’t addressed my point that for larger samples of data, researchers usually seek higher significance levels before accepting causality/correlation.

    I think it depends an awful lot on the field …

    There are very good recent examples for this type of thinking – which similarly puzzles me. Look at the reaction to the Thailand HIV vaccine trial for example – some groups wouldn’t accept the study conclusions because p=0.039 wasn’t good enough

    Well, offhand I can think of reasons for wanting a very high level of significance (p=0.01 or whatever). I am totally unaware of this specific example, but in medicine you might run into cases where a drug has serious side effects, for instance. Perhaps in this case you want to have an extremely high level of confidence before moving from trials to general use. In some cases the cost might be very high, and you want an extremely high level of confidence that positive outcomes are much higher than cheaper alternatives.

    I’m sure you can think of a lot of other possibilities.

    But your example does help point out that the specific p=0.05 value for “statistical significance” is a rule of thumb informed by practice. It’s not a fundamental property of statistics that falls out of any theorem proof.

  366. Shub Niggurath Says:

    When I said:

    “If the p value is higher than 0.05, the probability of temperature trends being what they are purely due to chance is as high as there being no trend at all.”

    I was trying to say:
    “If the p value is higher than 0.05, the probability of the temperature trend being what it is (a rising one) is for statistical purposes the same as there being no trend at all.”

    Thanks

  367. dhogaza Says:

    He did try to quote Ian Joliffe as an authority before, and ended up having Ian tell him he is wrong

    Yes, that Tamino misunderstood a statement of Joliffe’s regarding Mann’s form of PCA, not that Tamino doesn’t understand statistics. Joliffe also said there was confusing jargon around PCA and it was clear that this was part of the problem. “decentered” vs. “uncentered” vs. “what did Mann actually do, actually? (joliffe said he wasn’t sure from reading the paper)”.

    Slight difference of importance than your quote implies.

  368. dhogaza Says:

    Yup. And I said, to call the models currently in use “physics based”, when they are loaded with arbitrary parameters — which may be given any number of names, they still have no measurable physical basis in the data

    Ahem. This is not true for GISS Model E, at least. The parameters are physics-based, which ultimately rest on observations.

  369. Shub Niggurath Says:

    Oops, didn’t catch your reply! Thanks.

    Those who view p values on a continuum are usually the ones who want to prove their theory. Those who view them dichotomously are those who don’t agree with the said theory. Isn’t that true? :)

    It is also unlikely that a scientifically sound theory/body of knowledge will go for a long time without yielding statistically significant correlations or effects – of a fundamental nature – at some point. My opinion on this, from my brief study of climate science, is that the AGW hypothesis is still hanging in the air gasping for its p value.

    My question then is, as is on every reasonable skeptic’s lips: why shouldn’t we seek a greater level of certainty than what our data and research can afford us at the moment?

    Just as in medicine – the area that I am familiar with and work in – the stakes are high. A rearrangement of the world’s economy is sought. We should wait.

    Regards

    [Reply: You seem to strongly downplay the potential (or even likely) risks of business as usual. If a doctor tells you that you’d better quit smoking, otherwise your risks of severe lung illness will strongly increase, would you wait until you’re in the ICU before you stop smoking? BV]

  370. Kweenie Says:

    “joliffe said he wasn’t sure from reading the paper)”

    Joliffe also said (http://tamino.wordpress.com/2008/12/10/open-thread-9/#comment-25158):
    My view is that it was used inappropriately in MBH which was one error, and that the lack of transparency was certainly another error. Were they ‘huge’ errors? Everyone’s definition of ‘huge’ will differ (now there’s a huge topic for discussion!). In isolation I certainly wouldn’t deem them so. They became larger (in importance) than would otherwise have happened because the paper continued to be cited as having valid conclusions well after it became clear that some of its methodology was flawed. If it had quietly disappeared, but the errors noted and never repeated, leaving other less controversial papers to be cited when discussing past climate, the errors would not have attained such prominence. But I guess that would have been much less fun for the protagonists on both sides …

  371. Alex Heyworth Says:

    Re dhogaza Says:
    March 17, 2010 at 18:38

    Any paper claiming that a 1 w/m^2 forcing from different sources results in a different climate response won’t make it into any reasonable journal in the physical sciences.

    Two net 1 w/m^2 forcings from different sources would have the exact same effect if they were both evenly applied to an object which was all at the same temperature.

    What net effect 1 w/m^2 increases in CO2 forcing and solar forcing will have on the earth’s climate depends on how they are distributed and on what the temperature is of the parts the earth that get greater or lesser forcing.

  372. adriaan Says:

    This discussion is going nowhere… and it was such a nice thread when it started. I learned a lot. It was quite refreshing to learn from VS that you can apply a different form of analysis to the CO2/global temperature set. His exposé made a lot of sense to me. But I am not a climate scientist. I am a biologist. And we have been facing similar problems, but with an immensely lower impact on humanity in the short term. But I think we did a better job.

    First of all, it was soon recognized that metadata would become crucial to the exploitation of DNA sequence information. A common data format was rapidly established, data sharing mechanisms were set up, and version control was implemented. Next came the microarray data, and their concomitant statistical analysis. It was not long before a standard protocol was established for storing the raw data and metadata. No raw data, no publication. Everyone can now download numerous datasets from experiments performed worldwide, and reuse them for their particular purpose. Many analytical and statistical tools can be downloaded and modified according to one’s desire, since it is all open source.

    I would strongly suggest implementing similar approaches for climate-related data and analytical tools. And stop harassing each other. The fact that your model, based partially on physics and partially on a large number of assumptions, does not agree with the statistical analysis of observations should teach you a lot: the model is not right. That’s why it is a model.

    Sorry for the OT excursion into genetic research.

    [Reply: This has nothing to do with models (which actually agree quite well with the observations). If the idea of a pure random walk goes against conservation of energy, then it’s not a random walk, even *if* the values, divorced from any physical meaning, are inconclusive as to their randomness. It is a physical system we’re talking about. BV]

  373. Tim Curtin Says:

    Alex H: exactly, as my data above on Point Barrow show very clearly, especially as solar forcing at the surface is different everywhere, whether in the Arctic or at the equator, and quite different from TSI, which is invariant everywhere at any given date. Another oddity is that the earth is round, not flat, as implied by all who use TSI, like most here – so hours of daylight are in most places less than 24, and then there are angles, such as at Barrow, where horizontal direct and diffuse solar radiation forcings are quite different from those at places like Hilo in Hawaii.
    Bart: go figure – the global is made up of the local, which is ignored by most in this blog except for temperature. My micro Barrow data refute all the macro claims you have made.

  374. dhogaza Says:

    It is also unlikely that a scientifically sound theory/body of knowledge will go for a long time without yielding statistically significant correlations or effects – of a fundamental nature – at some point. My opinion on this, from my brief study of climate science, is that the AGW hypothesis is still hanging in the air gasping for its p value

    Based on 1995-present not quite reaching statistical significance at p <= 0.05?

    What's special about that timeframe?

    What's wrong with 1994-present? It meets the de facto test of statistical significance.

    I think you're playing games here …

  375. DML Says:

    I note from your plots that there is an increase in temperature beginning in ~1910 of about the same rate and duration as the one that begins in ~1945 (I believe that Jones of UEA recently remarked on this). Furthermore, I understand that the significant increase in atmospheric CO2 did not begin until ~1945. Application of Mill’s method of difference results in the inference that CO2 is not a significant causal factor in either rise. Thoughts?

    [Reply: CO2 is not the only driver of climate (see e.g. this graph), but esp over the past 3 decades it has become the most important one. Also in climate changes in the earth’s past CO2 played a major role, see e.g. this excellent lecture. BV]

  376. dhogaza Says:

    I would strongly suggest implementing similar approaches for climate-related data and analytical tools.

    Geez … let me guess …

    You’re probably unaware that the entire set of raw and adjusted data used to compute GISTEMP is online and in summary form available for free?

    (If you want DVDs of the scans of the individual pieces of paper forms on which weather data have been traditionally recorded, you’ll have to pay production costs, but they are available.)

    Care to guess when the GHCN data was first made available? Was it due to external pressure? Hardly. It first came online in 1992.

    18 years ago.

    This is just one example. There’s scads of available data.

    Like every other field of (non-proprietary) science, climate scientists have responded to the advent of cheap storage, cheap computers, and cheap internet connectivity by making data available. They publish in journals like Nature and Science just as people in your field do, and meet the same disclosure requirement for non-proprietary data.

    Why do you assume that it’s any different in climate science? I suspect it’s because you’ve gotten your information from denialist sites.

    The fact that your model, based partially on physics and partially on a large number of assumptions, does not agree with the statistical analysis of observations should teach you a lot: the model is not right. That’s why it is a model.

    Where did models enter into this? And, of course, there are several possibilities here …

    1. fundamental physics (not “a model”) has been overturned (if you believe B&R, and while VS claims not to be saying that, he’s treating B&R as authoritative)

    2. someone’s goofed up in their choice of tests

    3. need more data

    However, we don’t need more data to know that 1 w/m^2 radiative forcing due to increased CO2 is no different than 1 w/m^2 radiative forcing from the sun. Perhaps you don’t know how fundamentally wrong it is to say that. If you do, you must know that a line of statistical arguments that leads to that conclusion is fatally flawed.

    Anyway, best of luck overturning tons of physics. You’ll need luck, and a thick skin …

  377. Beaker Says:

    Adriaan, I find your reasoning quite strange, regarding different models. You seem to conveniently forget that statistical models have quite a lot of assumptions as well, and that those assumptions may not agree with the real world either. I can find all kinds of unlikely associations and relationships that have nothing to do with reality. VS started this thread by claiming temperature is a random walk, although he seems to have stepped away from that claim later on. But that is pretty unlikely from a basic physics viewpoint. It would tell me that perhaps the statistical model is wrong.

    In fact, as far as I can see, physical models restrain the values that certain parameters can take, and that is a good thing. It makes sure that your parameters aren’t going to do something that is physically unlikely, if not impossible. I have no clue why you would put so much trust in a statistical model over a physical model. If the two do not agree, I’d opt for the physical model most of the time.

  378. adriaan Says:

    @Dhogaza,

    I am not unaware of the fact that I can download GISS, GHCN etc. You make the false assumption that I am unaware of the problems in your field of science. I am only complaining about the fact that GISS and GHCN do not bother to have a good version control system, and that they continuously change their datasets without proper notice. And that I cannot go back to a previously published dataset to repeat a given analysis exactly with the original data, because they are not archived.
    And I am not dumb enough to start discussing whether 1W(sic)/m^2 radiative forcing is different coming from the sun or any other source. What I do not accept is that a measured value coming from the sun is equivalent in importance to a computed value. That’s where the models come in.

    And I do not need to turn over tons of physics. If I manage to turn over one single rule in your model, the model will be flawed. And that is exactly what VS was doing: pointing out that the correlation between CO2 and global temperature depends upon whether the data are of I(1) or I(2) class. And his conclusion was that your model was wrong.

    [Reply: His conclusion is that standard statistical procedures such as OLS are not valid *if* indeed there is a unit root present. His conclusion is not (or at least shouldn’t be) that physics-based climate models are wrong. It has no bearing on that. BV]

  379. adriaan Says:

    @Beaker,

    I know that statistical analysis needs information about the data in order to get the most out of the information in the data. But you can analyse the data without any a priori knowledge, and make statements about what the data reveal, or what they cannot reveal. A statistical model is only relevant for the data, and does not necessarily have to agree with a physical or biological model. Limiting the range of parameters on the basis of physical, biological or other knowledge can improve the performance of the analysis. But assuming that one knows the limits of the parameters also conveys risks and can introduce unwanted bias. That you opt for the physical model is OK, but I am afraid that you do not have the full overview of the physical model with all its limited parameters. As has been stated in this thread, we do NOT know all the details of the processes that are being modelled. A major factor is that everything is expressed as being global, whereas most of the extremes are local, and compartmentalized by sea currents, mountains, whatever. In biology, compartmentalization is one of the biggest challenges. No model of cellular activity has succeeded. And we can do thousands of experiments per week to test our hypotheses. So where does that leave you?

  380. Alex Heyworth Says:

    dhogaza

    However, we don’t need more data to know that 1 w/m^2 radiative forcing due to increased CO2 is no different than 1 w/m^2 radiative forcing from the sun.

    You seem to be fixed on this point. The additional data we need to determine that they have the same effect is that they are both evenly applied to an object with a uniform temperature. Since neither is the case when applied to the earth’s climate system, you do not have an argument (unless you could demonstrate empirically that the effects were the same).

    [Reply: Different forcings (of the same nominal value) can indeed have a slightly different temperature effect (as indicated by the “efficacy”), but they generally differ by a few tens of a percent, not by a factor of 3. BV]

  381. nigguraths Says:

    dhogaza
    I said that the anthropogenic global warming theory is gasping for its first p value. Not the global warming theory. No tricks. :)

    VS says here that climate models are phenomenological; Richard Lindzen says as much. Hadi Daulatabadi is saying that the earth’s climate system may compensate for greenhouse forcings in ways that result in changes without raising temperatures. The said phenomenological models examine temperature as an end result, which is then postulated to *cause* changes.

    Now the climate community may grasp this cause/effect corruption nuance, but I don’t see them ever explaining this. The flag of temperature always flutters high. Why is that?

    Every non-temperature based effects-in-the-real-world WG2 argument flounders today. Many WG1 scientists are openly derisive of WG2 arguments. Yet the heat trapped in the system, which caused their precious temperature rises, should have shown up in the WG2 arena – proving and supporting their hypothesis. But that is precisely where the wheels come off. Why is that?

    Regards

    [Reply: Climate models are physics based, notwithstanding sweeping statements to the contrary. Some WG1 scientists have the view that the WG2 report does not have quite as solid a scientific backing as the WG1 report. That’s partly due to the nature of the beast: it’s a ‘softer’ science, and the scientific literature base is thinner. This is now mentioned more often because the alleged errors were mostly in illustrative examples in WG2, and have been (ab)used to discredit the whole of (WG1) science. BV]

  382. adriaan Says:

    @nigguraths,

    That is because the model-derived results are openly discussed as being as important as, or even more important than, actual observations. This point has also been raised by VS in one of his first posts on this thread.

    The observations are not in agreement with the physical model, so the observations must be wrong.

    What can I do?

    [Reply: Strawman argument. No scientist has made such a claim. The observations *are* in agreement with the physical models. Hansen states in pretty much any talk he gives that our understanding of climate change is based on three pillars: current observations, paleodata, and physics based modeling. He notes that the former two are the most important. BV]

  383. Scott Mandia Says:

    BTW, the global CO2 ppm as measured by the Atmospheric Infrared Sounder (AIRS) satellite agrees well with Mauna Loa measurements. See graphic below that I just generated using NASA Giovanni:

  384. PaulW Says:

    CO2 might be increasing, but the “CO2 forcing” is “Missing” according to Trenberth’s latest paper.

    The situation almost looks a little random to me.

  385. S. Geiger Says:

    PaulW – is there a link to the paper (or at least the abstract) and/or a discussion of his (Trenberth’s) paper?

    Thanks

  386. Mike Says:

    VS,

    I am currently pursuing my PhD in statistics, and recently this thread was brought to the attention of the fellows in my cohort. I must tell you we have all thoroughly enjoyed your contributions to this thread and your repeated refutation of what appears to be widespread nonsense. I believe you have single-handedly converted at least two true believers!

    [Reply: The presence (or not) of a unit root does not negate basic physics (e.g. conservation of energy). BV]

  387. Eli Rabett Says:

    Since it has become a small issue here: in addition to some satellite measurements, there were a fair number of sampling flights in the 1970s that measured CO2 concentrations above the boundary layer, in the free atmosphere, and agreed with the Mauna Loa measurements and those from other sites.

    The earlier measurements mostly suffered from bad siting (don’t measure in the middle of Paris), bad timing (measurements in agricultural areas have huge daily swings), bad calibration (in most cases, what calibration?) and bad chemical technique (the various titrations are tricky), although you can go through them and find occasional ones that are usable.

  388. stereo Says:

    PaulW Says:
    March 18, 2010 at 02:47

    “CO2 might be increasing, but the “CO2 forcing” is “Missing” according to Trenberth’s latest paper.

    The situation almost looks a little random to me.”

    You have misunderstood what Trenberth is saying. He is not saying that the forcing is missing; he is saying it is hard to track the energy in such a complex land/ocean system.

  389. Tim Curtin Says:

    [edit] dhogaza: “we don’t need more data to know that 1 w/m^2 radiative forcing due to increased CO2 is no different than 1 w/m^2 radiative forcing from the sun”: I assume that right now it is night where you are? What is the radiative forcing from the sun as you sleep where you are? And what is the RF from CO2, which according to the IPCC (and Piet Tans at Mauna Loa) is practically the same night or day at ALL parts of the globe on any given day? No doubt you are right about RF and SSR having the same effect if both are 1 W/sq.m at the SAME location and at the SAME time, but 1 W/sq.m of SR at the top of the atmosphere is much less than that at the surface of the globe.
    Moreover, the so-called radiative forcing, which for CO2 was 1.66 W/sq.m in 2005 (WG1, AR4, p.141), is supposed to be radiation prevented from leaving the earth’s atmosphere and is therefore additive to the incoming radiation from the sun of 1,365 W/sq.m, for a total of 1,366.66 in 2005 when the TSI is incoming – but presumably the RF is busy on its own at night!
    The data I use are for average daily “global” (= total) direct+diffuse solar radiation expressed as Wh/sq.m, i.e. watt-hours per square metre per day, stated as averages for the month in question, so less in January in the NH than in July. Now the IPCC simply states radiative forcing in W/sq.m, and implies that at any given location it is invariant to day, night or season (loc. cit.).
    Finally, from AR4 WG1 p.141 we can deduce about 0.01469 W/sq.m of radiative forcing per ppmv of atmospheric CO2, so, given the Wiki climate sensitivity of 0.8 K/W/sq.m:
    Year             1880     2005       2100
    CO2 (ppm)         280      379        560
    RF (W/sq.m)       n/a     1.66   2.658938
    GISSTemp (oC)   13.87    14.65   15.44915

    Given the supposed logarithmic relationship between CO2 concentration and radiative forcing, even 15.45 oC for GISSTemp when CO2 has doubled seems an over-estimate. Whence the claimed 3 oC for a doubling of CO2?
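
    For reference, the standard simplified expression relating CO2 concentration to forcing (Myhre et al., 1998), combined with the sensitivity parameter quoted above, is

        RF = 5.35\,\ln(C/C_0)\ \mathrm{W\,m^{-2}}, \qquad \Delta T = \lambda\,RF

    so a doubling gives RF = 5.35 ln 2 ≈ 3.7 W/sq.m and, with λ = 0.8 K per W/sq.m, ΔT ≈ 3 K – which is where the claimed 3 oC per doubling comes from.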

  390. kim Says:

    As I walk along
    Random thoughts come in my head.
    Wander and wonder.
    ===========

  391. Eli Rabett Says:

    One significant point lost in the statistics is that most of the pre-1960 or so forcings DO NOT HAVE ANNUAL RESOLUTION. Some of them are pretty thinly spaced, like a decade or more, and have simply been interpolated. You have to dig down about two levels in GISS and then link out to the actual source to see this. For example, Etheridge et al. on CH4.

    Except for the strat aerosols, everything has been really heavily averaged, which means, when you look at it, that the “noise” (or random variation in the forcings) is essentially zero, and yes, VS, in that case, first or second differences ARE equivalent to differentiation. The strat aerosols have their own issues.

    AFAECS, this knocks a lot of the statistical flim-flam into a cocked hat.

    And oh yeah, Eli has played this game before with econometricians. About 100 more comments needed to beat that one.

  392. Alex Heyworth Says:

    BV, thanks for your reply to my comment above. I gather from looking at the tables referenced in the link you gave that efficacy is basically set so that current efficacy of CO2 forcing is = 1. Solar forcing efficacy at current levels of TSI is fairly close, in the range 0.91 to 0.97. (I am reading from the Ea column on the tables.) Does this sound right, or am I misreading them?

    [Reply: Sounds about right. BV]

  393. Alex Heyworth Says:

    PS, the tables are at http://data.giss.nasa.gov/efficacy/table1.pdf and http://data.giss.nasa.gov/efficacy/tables3n4.pdf

  394. VS Says:

    Hi Ron Broberg,

    My apologies for making you wait. I wanted to answer your question earlier, but ‘stuff’ got in the way.

    The question you pose has more to do with the methodology of statistical testing, than with anything else. Every statistical (hypothesis) test is basically constructed as follows. Do allow for some ‘informality’ here, for the sake of exposition:

    (1) we set a null hypothesis (H0)

    (2) we derive the distribution of the test statistic under the H0, mostly analytically but in some cases via simulation. (Sometimes we also derive distributions under various alternative hypotheses, but this is very technical stuff, so I’ll leave it there for the moment)

    (3) we set the maximum ‘deviation’ from the null hypothesis we will tolerate, before rejecting the H0 (the so called critical value of the test statistic, corresponding to our pre-chosen significance level)

    (4) we calculate the test statistic for our sample realization, and draw our conclusion by comparing it with the critical value set in (3)

    We can therefore never ‘accept’ a null hypothesis. We can only reject it, or fail to find sufficient evidence to reject it.
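
    To make step (2) concrete, here is a minimal sketch in Python (numpy only; the mean-zero t-test, the sample size of 30 and the effect size are arbitrary illustrations, not anything climate-related):

        import numpy as np

        rng = np.random.default_rng(0)

        def t_stat(x):
            # t-statistic for H0: the population mean equals zero
            return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

        # step (2): distribution of the statistic under H0, here obtained by simulation
        null_draws = np.array([t_stat(rng.normal(0.0, 1.0, size=30))
                               for _ in range(100_000)])

        # step (3): critical value for a two-sided test at the 5% significance level
        crit = np.quantile(np.abs(null_draws), 0.95)

        # step (4): compare the sample realization against the critical value
        sample = rng.normal(0.3, 1.0, size=30)   # toy data whose true mean is 0.3
        reject = abs(t_stat(sample)) > crit
        print(reject)   # True -> reject H0; False -> fail to reject (never "accept")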

    Now, in the case of unit roots, the various tests have different null hypotheses:

    ADF – H0: presence of unit root
    KPSS – H0: stationarity (no unit root)
    PP – H0: presence of unit root
    DF-GLS – H0: presence of unit root
    ZA – H0: presence of unit root

    So, in the cases of the ADF, DF-GLS and ZA, we concluded that there is insufficient evidence to reject the null hypothesis of a unit root.

    The KPSS has the opposite null hypothesis, namely the absence of a unit root (stationarity). Applying the KPSS testing procedure, however, we do reject that null hypothesis of no unit root.

    This is the standard statistical procedure for assessing a series in terms of unit roots. My ‘conclusion: unit root’ statement applies to the final inference we make considering all the test results. So while I understand how you came to your idea, and your logic is correct, it is not applicable in this instance.
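
    To make the battery concrete, a minimal sketch in Python using statsmodels (the series is a simulated random walk standing in for the anomaly data, so the exact p-values will differ from the ones discussed in this thread):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller, kpss

        rng = np.random.default_rng(42)
        y = np.cumsum(rng.normal(size=130))     # pure random walk: one unit root

        # ADF: H0 = presence of a unit root
        adf_p = adfuller(y, regression="c", autolag="AIC")[1]

        # KPSS: H0 = stationarity (no unit root); nlags="auto" needs a recent statsmodels
        kpss_p = kpss(y, regression="c", nlags="auto")[1]

        print(f"ADF  p = {adf_p:.3f}   (large -> cannot reject the unit root)")
        print(f"KPSS p = {kpss_p:.3f}   (small -> reject stationarity)")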

    Allow me to connect this to some standard jargon used in science.

    Note that in a, say, OLS regression we often refer to a coefficient as ‘statistically significant’. What we actually mean in such cases is that the H0 that the coefficient is in fact equal to 0 is rejected at the chosen significance level.
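
    In code terms, a short sketch with statsmodels (the data are made up):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        x = rng.normal(size=100)
        y = 0.5 * x + rng.normal(size=100)      # true slope is 0.5

        fit = sm.OLS(y, sm.add_constant(x)).fit()
        # "significant at 5%" == the p-value for H0: slope = 0 falls below 0.05
        print(fit.params[1], fit.pvalues[1])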

    In any case, I hope this helps with interpreting the test results.

    ——————-

    Rabett should first read up on some econometrics before ‘playing games with econometricians’. I responded to his claims elaborately enough, here.

    I think it ought to be clear why I’m not debating anything with individuals of this ‘caliber’.

    PS. NO G&T DISCUSSION PLEASE. I elaborated my position on the G&T issue below the linked post. If you really need to comment on (my position relating to) G&T, do read the relevant posts first, please.

    ——————-

    Hi Mike,

    thanks, I appreciate the comment :) The discussion was in part meant for ‘my people’.

    And Bart, you replied there (although I don’t think that’s the ‘nonsense’ Mike was referring to):

    [Reply: The presence (or not) of a unit root does not negate basic physics (e.g. conservation of energy). BV]

    Thank you, can you now please proceed to explain that to all the ‘contributors’ here claiming otherwise?

    ——————-

    I would like to draw the readers’ attention to this comment by whbabcock.

    Very important methodological issues, that I’ve (obviously) failed to address as eloquently as whbabcock.

    ——————-

    Finally, Bart, it seems our ‘little’ discussion here has received quite some exposure ;)

    [Reply: I’ve been trying to make that point repeatedly now: The presence (or not) of a unit root has no bearing on the basic physics. The presence of a random walk however is inconsistent with basic physics. There is now a whole chorus here (from the exposure you indicate, I guess) desperate to claim that climate models and AGW as a whole are now suddenly junk because there may be a unit root in the temp series. That is of course utter nonsense. Do you agree?
    Yet you surprise me again in ‘recommending’ whbabcock’s comment. I replied to him here. He makes exactly the incorrect inference I described just above, as if AGW is falsified by the presence of a unit root. BV
    ]

  395. Josh Says:

    Completely on topic and amazingly well timed – did anyone else catch this?

    http://www.sciencenews.org/view/feature/id/57091/title/Odds_Are,_Its_Wrong

    “For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings….” + another few thousand words.

  396. Dave McK Says:

    VS – please accept my thanks for upholding AND defending a standard.
    ‘what would you expect when you walk into a church and say there’s nothing supernatural’? lol – you really brought out the juice.
    It is malice, in case you doubt. They live off the benefit of doubt. That is the nature of their malice.

    Besides, chem 101 students can prove that water vapor in any unit of atmosphere carries 50,000 times the energy of the CO2 component (at the very least), so the fetishists are never going to find the driver in the residuals.

    [Reply: Quit the namecalling.
    Nitrogen is still way more abundant in the atmosphere than any other species, and it does nothing. The effect depends on physics. Water vapor is a greenhouse gas, but acts purely as a feedback, not a forcing. BV
    ]

  397. Dave McK Says:

    That is 50,000 times when there’s 1% water vapor and 500 ppm CO2, incidentally, to identify the context of the calculation.

  398. DML Says:

    Your repeated reply to VS and those who express agreement with him basically consists of the following argument:

    If VS is correct, basics physics is wrong.
    Basic physics is not wrong.
    Therefore, VS is wrong.

    But this is not a sound argument because the first premise is false. Both basic physics and VS can be correct because VS’s analysis concerns a climate model, and climate models contain more than basic physics – they include hypotheses in the form of assumptions and simplifications, and some of those hypotheses can be wrong even if the basic physics in the model is accurate.

    It is still possible, of course, that VS’s analysis is wrong. But since your argument is a non sequitur, it doesn’t challenge his analysis.

    [Reply: This isn’t about climate models at all. Where does everyone get that claim from? You so badly want to find fault with them, is that it?
    The conclusion of a random walk is inconsistent with physics. The fact that it isn’t straightforward to reject the presence of a unit root has no bearing on the physics, though many people want to make it appear as such. That is what I’m arguing against. BV
    ]

  399. Ian Says:

    VS, could you summarize the implications of your analysis? Like some other readers, I thought in your initial comments that you wanted to turn a blind eye (at least for this analysis) to physical understanding of the temperature record and simply apply ADF-style analyses to the dataset. As did some other readers, I inferred when you said “no trend” that you concluded global temp was, in the main, a random walk. In later comments I think you were arguing that you didn’t think this, but I’m not clear what you meant.

    Or, to restate my question: what implications does your analysis have for our physical understanding of climate? If it’s something simple (say, a restatement of the fact that short time scales are not useful in relating CO2 forcing to temperature, or that greenhouse emissions and concentrations in the atmosphere are not related linearly), then fine – but it would be nice to have a clear statement of what you think. If it’s something more novel, then a concise summary would be appreciated.

    [Reply: Seconded. BV]

  400. A C Osborn Says:

    dhogaza Says:
    March 17, 2010 at 20:41
    “Presumably because that’s what was available, if true. Do you have a source stating that this is the only source that’s used?”

    I was hoping for an answer from Scott as they were his statements, but as you have responded –

    As a very knowledgeable person on climate research, are you actually saying that you don’t know where the CO2 data that is so important to the research actually comes from prior to 1958?

    The reason I ask is that it is a virtually straight line for the 10,000 years before 1958; with wars, volcanoes and greatly varying temperatures, shouldn’t there have been some changes in the CO2 level?

  401. A C Osborn Says:

    dhogaza Says:
    March 17, 2010 at 20:41
    And who do you think is in charge of the conspiracy to ignore that data?

    Conspiracy? Wow do you think there is a Conspiracy then?
    I was trying to establish the reasoning/decisions behind using what is currently used. After all, the whole basis of greenhouse gases goes back a long way; are you saying that they couldn’t accurately measure atmospheric CO2 when that was first proposed?

  402. Jimmy Haigh Says:

    [edit. Comment on the substance, not on the person. BV]

  403. Marco Says:

    @A C Osborn:
    Volcanoes contribute, on an annual basis, about 1% of the total anthropogenic emissions of today. That already is negligible. More important, however, is that there is no evidence of periods with markedly increased or decreased volcanic activity over the last many centuries which would result in a significant reduction or increase in CO2 emissions.

    There have been some variations in the last few centuries (see e.g.
    http://zipcodezoo.com/Trends/Trends%20in%20Atmospheric%20Carbon%20Dioxide_2.gif ), but don’t expect wars or volcanoes to have been a major factor. The latter may have been one source of decrease, though, through temperature changes. But the effect is limited: the six-degree temperature increases during interglacials show a 100 ppm increase in CO2. We’re already at the same increase with a ‘mere’ 1 degree increase.

  404. Bart Says:

    ALL: Please put comments that are not on topic in the open thread. The topic here is (statistical properties of) the temperature record (and its implications).

    Assertions that climate science is bogus because of x, y or z belong in the open thread (as do all other topics besides those mentioned above). Before making such assertions, please check what the science has to say about it e.g. here and take that into account in your comment.

  405. A C Osborn Says:

    Bart Says:
    March 18, 2010 at 15:09
    So Scott posts data and I can’t ask where he gets it or if he understands it. OK Bye.

  406. IanH Says:

    Ian @ 13:56 Says
    VS, could you summarize the implications of your analysis? Like some other readers…

    As I understand it, VS has looked at the temperature record (note: not the models, not the physics) and determined that the record demonstrates temperature is I(1) and the GHGs are I(2), yet the models, and nearly all climate researchers, are trying to fit a linear regression – oops, won’t work, can’t work. He’s not, as I understand it, arguing with how the model is constructed, or how the earth’s physics works, just that you can’t wire the known physics together as per the GCMs. I’ve never seen him say that because temp is I(1) the physical processes themselves are wrong; he’s not gone there, because as he says it’s not his field. It is now the job of the modellers to rethink their GCMs.

    [Reply: No. His analysis has no bearing on GCM’s or on our understanding of radiation physics. It may have bearing on the uncertainty estimate of a linear trend. Something entirely different. BV]

  407. nigguraths Says:

    BV
    This isn’t about climate models at all. Where does everyone get that claim from? You so badly want to find fault with them, is that it?

    This is related to the models. Because if VS is right, it is up to those who argue in favor of the models being representative of the climate reality to demonstrate that their models incorporate this nature of temperatures.

    All parts of the AGW hypothesis have to work for it to be accepted.

    [Reply: This is related to the validity of OLS; *not* to the validity of physics based climate models. GCM’s don’t incorporate the temperatures, they try to simulate them.
    For you to claim that AGW is wrong you’d have to substitute it for a theory that works even better at explaining all the data. I’ll be waiting (not). BV
    ]

  408. mpaul Says:

    “VS, could you summarize the implications of your analysis?”

    I would strongly advise VS not to answer that question. Let’s just stick to the narrow topic at hand. The implications should be left to others. VS has simply proven that GISSTEMP and CRUTEM3 are I(1). QED.

  409. VS Says:

    Hoi Bart,

    Thanks for your reply. You are absolutely correct in stating that the presence of a unit root in the temperature series doesn’t automatically ‘disprove’ AGWH.

    I never made that claim.

    Again, for the record, I was (too) loose with my wording when I stated that temperatures are a ‘random walk’. I also said, ‘statistically speaking, a random walk’. I even stated on March 5th:

    “I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks. That’s furthermore quite clear when we look at Ice-core data (up to 500,000 BC); at the very least, we observe a cyclical pattern, with an average cycle of ~100,000 years.”

    If you read my posts carefully, you will see that I (formally) invoked the random walk model in order to explain the idea behind a unit root, the simplest of all processes containing a unit root. Had this been a ‘normal’ debate, this would have been sorted out in three back-and-forth posts.

    Anyway, what the presence of a unit root does do however, together with the two unit roots found in various GHG forcings, is indicate which statistical method we need to apply in order to analyze the time series. As I stated earlier, we first establish the I(1) (here and here) and I(2) properties (here) of the series (temp and GHG’s, respectively), and then we proceed with (polynomial) cointegration in order to establish (or reject) any (non-spurious) correlation in the time series records.

    I, and others, have elaborated extensively on why ‘regular’ (multivariate or not) OLS regression analysis, including the calculation of confidence intervals of deterministic trends, is invalid in the presence of a unit root. This is important. Note that cointegration is the method for analyzing series containing unit roots. This too, is important.
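
    The classic demonstration is the ‘spurious regression’ of Granger and Newbold: regress one random walk on another, independent, one, and OLS will declare a ‘significant’ relation far too often. A minimal simulation in Python with statsmodels (no climate data involved):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        false_positives = 0
        for _ in range(1000):
            x = np.cumsum(rng.normal(size=100))   # a random walk
            y = np.cumsum(rng.normal(size=100))   # an independent random walk
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            false_positives += fit.pvalues[1] < 0.05

        # Unrelated I(1) series "correlate" far more often than the nominal 5%:
        print(false_positives / 1000)             # typically well above 0.5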

    Now, had we not been distracted by various ‘refutations’ posted on various ‘science’ blogs, we would have arrived at cointegration (as applied by both K et al, and BR) much earlier.

    I’m eager to continue on this topic. For all those that ‘can’t wait’, here’s a post on cointegration analysis which I found very illustrative in this context. I admit I haven’t double checked everything posted there – this discussion here is already taking up way too much time – but David Stockwell (the author of these posts) looks like he knows what he’s doing.

    A nice test (in-sample forecast) of BR is then performed here. That last one should be very interesting for those already familiar with cointegration.

    I recommend the actual posts to all truly interested in the theory behind cointegration, as well as the actual (not ‘straw-manned’) contents of the BR paper.

    As for that post by whbabcock. What I liked there is the methodological frame he erects. I completely agree with his assertions in that sense (i.e. ‘analytical fact’, save errors in execution/data).

    The rest, we’ll reproduce and evaluate, like real positivist scientists ;)

    [Reply: I updated my newest post to include your clarification re ‘random walk’, where I explain that I have no particular beef with unit roots, but that pure randomness is unphysical. Many other commenters however seem to hold up your thesis here as a smoking gun that slams GCM’s and disproves AGW. Why don’t you join me in setting them straight? BV]

  410. Bart Says:

    ALL: In a new post I explain my view on the relevance of the unit roots. It does *not* mean that GCM’s are now invalidated or that AGW is now on its knees.

  411. Scott A. Mandia Says:

    I did not reply because dhogaza beat me to it but he shares the same view. Eli also responded today with the same arguments.

    Question 1, why are we using the Mauna Loa atmospheric level of CO2 from 1958 onwards, why not use whatever we were using to show the values before 1958?

    Direct measurements, if done correctly such as Mauna Loa data, are always better than interpreting values from proxies.

    Question 2, If we were using Ice Core data prior to 1958 why?

    Most of these locations are remote so there is less chance of siting issues. Furthermore, humans were not taking measurements for most of the 650,000 year ice record.

    Question 3, why aren’t we using all the other Valid Scientific Measurements of CO2 prior to 1958?

    and

    Question 4, why do we ignore Valid Scientific Measurements of CO2 for the 1940s which show around 375/380 ppm?

    Because it is extremely unlikely that they are a valid representation of global averages. We have a very good approximation of how much carbon is being emitted today by human sources, and nature hasn’t changed much since the 1940s regarding natural source/sink issues. There is no explanation for a 1940s 375/380 ppm value other than that those values are tainted somehow.

    BTW, I show data regarding CO2 from volcanoes and from nature vs. humans at the two links below:

    http://www2.sunysuffolk.edu/mandias/global_warming/global_warming_misinformation_volcanoes.html

    http://www2.sunysuffolk.edu/mandias/global_warming/global_warming_misinformation_nature_emits_more_co2.html

    The links below shows locations of ice cores:

  412. Shub Niggurath Says:

    “For you to claim that AGW is wrong you’d have to substitute it for a theory that works even better at explaining all the data.”

    No. Nothing like that needs to be done.

    The IPCC scenarios (and thence the climate models) propose a range of sensitivities and feedback strengths. Since there is no one study that we can discuss, if we take the AR4 to be representative of the sum of the physics of AGW, we are already in the realm of the unfalsifiable.

    Present-day temperature rises are stuck on the shore of Jones’ p-values, and the millennial record is stuck on the shore of the hockey stick.

    Forgive my colorful language, but I can substantiate my claims. As a side question, are there any physical processes that climate scientists consider cannot be modelled? I know you guys model mosquitos and microbes like algae.

    [Reply: Of course you’d have to replace it by something better. Why would you throw it away otherwise? Do you throw a medical diagnosis to the wind because there’s inherent uncertainty associated with it? And replace it with, well, with what exactly? Not with “nothing” I may hope, but rather with something that you deem offers a better diagnosis. BV]

  413. A C Osborn Says:

    Scott A. Mandia Says:
    March 18, 2010 at 17:05
    Direct measurements, if done correctly such as Mauna Loa data, are always better than interpreting values from proxies.
    So you are saying that scientists, between the time that they identified the “greenhouse” gases and 1958, were not capable of taking measurements that were better than the very poor approximation provided by ice core samples?

  414. Pat Cassen Says:

    A C Osborn – You can look this stuff up. “THE PRE-INDUSTRIAL CARBON DIOXIDE LEVEL”, T. M. L. Wigley, Climate Change, 5, 315 (1983)

  415. A C Osborn Says:

    Pat Cassen Says:
    March 18, 2010 at 18:11
    OK, I read it. Are you still saying that an ice core simulation (which is known NOT to replicate modern CO2 levels) is better than the measurements of the scientists who worked from 1880 to 1958?
    I am not talking about prior to 1880 or even 1900.

    [Reply: Here’s a graph of temp and CO2 (Law Dome and Mauna Loa) starting at 1880. They line up very well. The problem with CO2 measurements is that you need a site that is not affected by emissions, but preferably not by vegetation fluxes either, plus you need long-term continuous measurements. That is why Ralph Keeling’s work was so groundbreaking. BV]

  416. ScP Says:

    Josh, a great article and as worth reading as this exceptional thread – this is the best post and discussion I have read this year.

    Thank you VS for the clarity with which you explain and thanks Bart for hosting the post!

    By the way, Josh, are you any relation to the other ‘climate’ Josh? – see http://www.cartoonsbyjosh.com

  417. Marco Says:

    @A C Osborn:
    What ‘simulation’ are you talking about? Those are direct measurements. And the Law Dome data fit quite well with the Mauna Loa data.

    What we know (yes, know) from the pre-1950s data is that local ‘contamination’ of the measurements is very, very likely. We can still show that: just take a trip around the countryside with a CO2 analyser. You’ll get wildly varying numbers depending on time of day, wind force and direction, location, height, etc.

  418. Pat Cassen Says:

    A C Osborn – Not sure what your problem is. From Wigley, 1983:

    “There are 19th century data from the southern hemisphere … These data are of high quality (comparable with the best measurements made prior to the 1950s) and may well be the only 19th century data available which are unequivocally free from local or regional pollution effects. Many of the measurements are significantly less than the commonly assumed ‘pre-industrial’ value of around 290 ppmv first suggested by Callendar.”

    And see Marco, above.

  419. HAS Says:

    A couple of observations, then a question of clarification or two and a comment.

    First, in terms of the initial discussion about confidence limits on forecasts: there isn’t just a problem with the specification of the model; there is, as far as I can see, little regard given to the systemic variability in the underlying measures (temp and GHG).

    Second, the passage of time causes nothing, so time is basically a surrogate variable. The interesting issue when testing models etc. is what it is a surrogate for. As I understand it, time series analysis helps to identify the characteristics of the systems (i.e. models) that generated the series.

    To my questions of clarification.

    My understanding, then, is that given that temp (as measured) is I(1) and GHG (also as measured) is I(2), the problem goes beyond simply the issue of which statistical tests should be applied.

    First, it says that any simple linear relationship between them will be an invalid model, because under these circumstances temp being I(1) would imply GHG is I(1), which it is not. Is this correct?
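
    In symbols, the bookkeeping behind this question (as I understand it):

        \mathrm{temp}_t = \alpha + \beta\,\mathrm{GHG}_t + \varepsilon_t, \quad \varepsilon_t \sim I(0),\ \beta \neq 0 \;\Rightarrow\; \mathrm{temp}_t \sim I(2) \text{ whenever } \mathrm{GHG}_t \sim I(2),

    since a linear combination inherits the highest order of integration present. That would contradict temp being measured as I(1), which is what pushes one towards relating temp to the differences ΔGHG (themselves I(1)) – the ‘polynomial cointegration’ route.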

    Going further, any GCM that fails to generate temp as I(1) and GHG as I(2) has to be rejected as a valid model (taking into account the fact that some of the GHG is exogenous). Is this correct?

    Finally, for those who think the rules of physics reign supreme through all this: I suspect you are ignoring the fact that GCMs are complex constructs using various physical sciences as inputs, but dealing with significant uncertainty. When a complex engineered system fails we don’t blame the laws of physics; we blame the accuracy of the modelling.

    In fact the issue of the bumblebee (referred to a few times) is instructive, but not for the reason raised. Model builders using the laws of physics claimed the bee couldn’t fly; empirical observation shows it can, so the models needed to be revised.

    By analogy, model builders using the laws of physics conclude temp and GHG have certain attributes, but observations of them show this not to be the case. The models need to be revised.

    [Reply: The issues discussed here have no bearing on GCM’s. Why do you bring them up? BV]

  420. Kweenie Says:

    “o you throw a medical diagnosis in the wind because there’s inherent uncertainty associated with it? And replace it with, well, with what exactly? Not with “nothing” I may hope, but rather with something that you deem offers a better diagnosis.”
    .
    Continuing your metaphor: in that case I would ask for a second opinion. Which, I believe, in medicine is not unusual and not considered to be a bad (skeptic?) thing.

    [Reply: So would I. For climate, see e.g. here or here if you were to *randomly* search for a second opinion. BV]

  421. Kweenie Says:

    “[Reply: So would I. For climate, see e.g. here or here if you were to *randomly* search for a second opinion. BV]”

    Using quotation marks around “randomly” is apt, looking at the links. One might prefer to go to a different “hospital” (http://www.co2science.org/data/mwp/mwpp.php)?

    [Reply: That “hospital” (or “doctor” I might say) was most definitely not randomly picked. It’s like the situation where you dislike your MD’s diagnosis, and you specifically seek out the one doctor in the nearest 100 km from whom you know that (s)he thinks smoking is not bad for your health. BV]

  422. Shub Niggurath Says:

    Of course you’d have to replace it by something better. Why would you thow it away otherwise? Do you throw a medical diagnosis in the wind because there’s inherent uncertainty associated with it? And replace it with, well, with what exactly? Not with “nothing” I may hope, but rather with something that you deem offers a better diagnosis.

    You don’t have to replace the theory of anthropogenic global warming with anything. We think our measurements tell us that the globe is warming – stating that is enough. I differentiate AGW from GW always. GW is just observation – we are all fine with that.

    No diagnosis has ‘inherent uncertainty’. The uncertainty is in the face of less evolved disease, un-investigated physical findings and limits of knowledge. Doctors should strive to reach a more accurate diagnosis but say “we don’t know” when they don’t. Many patients die without a diagnosis. Many other patients get stuck with some label and they die anyway.

    Patients are more angry at dishonest doctors than doctors who cannot diagnose what they have.

    [Reply: So I guess you’re fine with a medical diagnosis stopping at the point where the thermometer says you have a 40-degree fever which is slowly getting worse? You wouldn’t want to know why, and to then be able to do something about it? BV]

  423. dhogaza Says:

    VS: you agree with at least part of B&R’s analysis, which in its entirety leads to conclusions that are physically impossible.

    I suggested some time ago that your time could be more profitably spent, perhaps, by figuring out where they’ve gone wrong, and why.

    Because unless you do so, there’s really no reason for us to place much credence in what you’ve done. If your recreation of part of what B&R have done includes one or more of the horrific blunders they’ve made, well … the implications are obvious.

  424. dhogaza Says:

    A nice test (in-sample forecast) of BR is then performed here. That last one should be very interesting for those already familiar with cointegration.

    I recommend the actual posts to all truly interested in the theory behind cointegration, as well as the actual (not ’straw-manned’) contents of the BR paper.

    The fact is that B&R’s full analysis leads to horrifically impossible conclusions.

    You claim this:

    1. I’m not ‘disproving’ AGWH here.
    2. I’m not claiming that temperatures are a random walk.
    3. I’m not ‘denying’ the laws of physics.”

    If you agree with Stockwell’s lead paragraph at the site you linked:

    Beenstock’s radical theory needs to be tested. As discussed here, he proposed that CHANGE in greenhouse gases (delta GHGs or dGHGs) not absolute values produces global warming.

    Then you are declaring that *some* laws of physics have been overturned, and since their application lies at the heart of CO2-forced warming (regardless of the source of the CO2), you are necessarily rejecting current AGW theory as well.

    That would make your statements #1 and #2 false, though presumably it’s due to your not understanding the consequences of Stockwell’s restatement of one of B&R’s key conclusions rather than dishonesty.

    If you disagree with the unphysical conclusion made by B&R and agreed to by Stockwell, then do us all a favor:

    Where did B&R go wrong? You’re claiming to be the stats expert here. You’re the one claiming that Tamino (who has a new paper in GRL just out) doesn’t understand 1st-semester statistics.

    Back up your claims to statistics guru status by identifying where B&R went wrong.

    Or admit that perhaps you *are* claiming that a whole bunch of physics has been proven wrong by B&R …

  425. dhogaza Says:

    And I see that VS is back to his insulting ways again …

    On your ‘science’ blog, to which you link

    Eli Rabett is a PhD chemist and professor. Chemistry is science. You can guess my opinion of economics, an opinion you’re strengthening with almost every post.

  426. dhogaza Says:

    Eli says:

    One significant point lost in the statistics, is that the most of the pre-1960 or so forcings DO NOT HAVE ANNUAL RESOLUTION. Some of them are pretty thinly spaced, like a decade or more, and have simply been interpolated. You have to dig down about two levels in GISS and then link out to the actual source to see this. For example Ethridge, et al. on CH4

    Except for the strat aerosols, everything has been really heavily averaged, which means, when you look at it, that the “noise” (or random variation in the forcings) is essentially zero, and yes, VS, in that case, first or second differences ARE equivalent to differentiation. The strat aerosols have their own issues.

    AFAECS, this knocks a lot of the statistical flim-flam into a cocked hat.

    VS claims that his earlier post shows that Eli is wrong.

    VS: provide a reference to a paper showing that he’s wrong about the effect of averaging.

    Also, you’ve ignored the fact that the earlier forcing data is of relatively poor quality in many cases. I was unaware of the heavy averaging and instances of interpolation, but had been pondering just when to ask you how you can statistically treat all of this data uniformly when measurement error increases greatly as you go further back for some of the data, and that the increases aren’t uniform across the different things being measured.

  427. dhogaza Says:

    Well, I don’t have a lot of time to poke around, but I am finding references claiming that smoothing does indeed make it more difficult to reject a unit root. So Eli’s claim that …

    everything has been really heavily averaged, which means, when you look at it, that the “noise” (or random variation in the forcings) is essentially zero

    can cause difficulty does seem pertinent.

    Perhaps this professor isn’t as dumb as VS claims. Perhaps Tamino isn’t, either.

    And perhaps B&R are as wrong as those who understand the physical implications are trying to point out …

  428. dhogaza Says:

    VS:

    I think it ought to be clear why I’m not debating anything with individuals of this ‘caliber’.

    Since his identity has been exposed more than once on this thread, this is who VS won’t debate.

    I’d be careful of debating him, too. He clearly knows his stuff.

    VS – where’s your CV and what’s your real name, since you went out of your way to “out” the wascally wabbit and tamino early in this thread?

    Me, I just have a humble BS in CS, though since I took my senior sequence and did my senior project in my freshman year, I did take pretty much every graduate level course remotely related to computer science my school offered.

    Oh, well, time to see if VS will take the time to identify where B&R has gone wrong. Thus far, he’s only made claims about them being right …

  429. Al Tekhasski Says:

    Bart wrote:
    “The earth climate remains constant if in- and outgong radiation equal each other”

    No, it is not necessarily true. Climate (== the spatially-distributed surface–atmosphere system) may perfectly well fluctuate even if the average energy flux across the system remains unchanged. First, because the air is coupled with massive (but liquid) reservoirs with big thermal inertia; and second, because various spatial distributions of surface temperatures may have different global temperatures but the same average emission.

    [Reply: I include the oceans in my notion of climate. E.g. ENSO shuffles energy around and thereby influences atmospheric temps without an energy imbalance at TOA. It does not however change the total heat content of the earth system. A radiative forcing such as from GHG, aerosols or solar does. BV]

  430. David Adamson Says:

    “VS – where’s your CV and what’s your real name, since you went out of …” – dho, please get back to trapping and ringing raptors; this advanced statistics is obviously way too much for a simple BS.

    Et tu Brutus!

  431. dhogaza Says:

    [Edit. Calm down.]

  432. dhogaza Says:

    David Adamson:

    So what, are you proud that you can type “dhogaza” into Google? Does it make you feel superior that you did a Comical Tony Watts style “partial outing” rather than a full VS-style outing?

    [edit]

    Christ, anyone who can type “dhogaza” into Google will find my personal information. I use the handle because …

    1. I like it

    2. It’s unique, people who want to learn who I am can find out (though in the future I may reconsider this, because of assholes like you)

    3. Who is VS? Inquiring minds want to know. Why does he hide?

  433. dhogaza Says:

    More complete disclosure, [edit]

    email: dhogaza@pacifier.com
    website: donb.photo.net
    professional website: openacs.org
    ethnicity: 75% German, 25% Dutch (as best I can determine)
    religion: fallen methodist
    HWP?: possibly, given my age

    what else do you want to know, [edit]?

  434. dhogaza Says:

    And, oh yes, when I was in my late 30s and early 40s, I was one of the top raptor trapper/banders in the world.

    this advanced statistics is obviously way too much for a simple BS

    And, ignoring your ignoring of my explanation of my university degree … apparently it’s too much for VS.

    His endorsement of a statistical analysis by B&R that essentially says much of modern physics is wrong is simply stupid.

    And your creative way of trying to defend it … is beyond stupid.

  435. HAS Says:

    Re my comment at March 18, 2010 at 21:07

    [Reply: The issues discussed here have no bearing on GCM’s. Why do you bring them up? BV]

    Because if the answer to the two questions I ask is in the affirmative, the issues discussed here constrain the classes of models that acceptably describe the world as observed. (There is also an alternative explanation, namely that the data series used to derive the results are inaccurate.)

    I’m sure you’re not saying here that climate models are deterministically created from the laws of physical science, and that therefore to question the results of those models is to question those laws. Although I’m not completely sure, given your comment on the post you link to: “just as gravity is not falsified by observing a bird in the sky”.

    For me, I take the view that while the physical sciences can describe parts of the processes leading to climate taken in isolation, taken as a whole one is dealing with a system of very great complexity and uncertainty. There are many ways to combine the parts to give the whole, and this is where the fun (in the sense that science is fun) begins.

    Perhaps do me the courtesy of trying to understand the implications of my questions, and if you have a view on the answers, share it.

    [Reply: “models that acceptably describe the world as observed.” Upper panel: human and natural climate forcings. Lower panel: natural forcings only. BV]

  436. dhogaza Says:

    Bart, I apologize for my wrath, but for god’s sake [edit]

    Anyone can google for me. My handle is my brand, and I’ve been on the net forever, so actually “dhogaza” is likely to return more information for many queries than “Don Baccus” (one reason I keep it. plus … I like it).

    Meanwhile, there are serious questions on the table, which Admonson shall we say … tried to fiddle while making a fool of himself.

    1. Who is VS? Asked because he’s “outed” two people who post with semi-transparent pseudonyms, while staying hidden himself – totally vile, IMO.

    2. What are his credentials? (Note: I related my academic experience before asking the second time – I haven’t revealed my professional ones, but given that I’ve just entered a 2.5-month contract for $25,000, let’s just say I don’t care what people like VS or Admonson think.)

    3. I asked for answers to some specific questions, which VS has a tendency to ignore (other than insulting other people regarding their credentials, without revealing his own).

    Blah blah.

    Get down to it, VS. Admonson [edit. No namecalling, swearing etc. BV]

  437. dhogaza Says:

    HAS:

    I’m sure you’re not saying here that climate models are deterministically created from the laws of physical science, and that therefore to question the results of those models is to question those laws

    Actually, to quite a large degree, they are, no matter how much you want to believe otherwise.

    The Monte Carlo aspect of individual runs has to do with the setting of initial conditions (which can never be known totally precisely), along with random perturbations ranging over things which can’t be precisely determined (even for an atomic weapon, which was the domain for which Johnny von Neumann invented the methodology – note: the Hiroshima and Nagasaki bombs *did* explode).

    This is the source of the non-deterministic aspects you’re talking about, yet we know it’s not a problem if proper (non-B&R, non-VS) statistics are applied (I’m sure VS could provide us a statistical proof that Fat Man didn’t explode over Nagasaki, for instance).

    Anyway, models of this sort are soundly based on physics. If you want to reject climate models on this basis, you must reject those used to engineer and design nuclear weapons. They’re built on common ideas.

    And as we know from the history of WWII, and after-war tests … they do blow up.

    However much you think such models are dumb, stupid, etc … a wide variety of fission and fusion weapons, when tested, *have* blown up.

  438. dhogaza Says:

    Bart wrote:
    “The earth climate remains constant if in- and outgong radiation equal each other”

    No, it is not necessarily true. Climate (== the spatially-distributed surface–atmosphere system) may perfectly well fluctuate even if the average energy flux across the system remains unchanged. First, because the air is coupled with massive (but liquid) reservoirs with big thermal inertia; and second, because various spatial distributions of surface temperatures may have different global temperatures but the same average emission.

    If Al doesn’t understand that he and Bart are saying the same thing, lord help us.

    Other than the fact that Bart’s talking about climate in equilibrium, and Al is muddying by “First because the air is coupled with massive (but liquid) reservoirs with big thermal inertia” talking about climate out of equilibrium.

    And further that Al is pretending that Bart’s statement is assuming “equilibrium will end weather”, which is silly …

  439. HAS Says:

    dhogaza at March 19, 2010 at 05:37

    I trust you understand the irony of saying, on a thread where we are debating the underlying statistical processes of a couple of key variables, that all is solved by running Monte Carlo simulations to get the initial conditions right.

    I should add that I don’t think that “such models are dumb, stupid, etc …” – that is your projection – and also that I think it would be relatively trivial to show that, compared with climate modelling, nuclear fusion or fission are strongly bounded problems.

    If dealing with the world’s climate over the last few thousand years is this easy for the physical sciences, why not go and sort out a few of those intractable problems in the social sciences, to show what “hard data men” can really do? I understand I should now say “sarcasm off”.

    My point is: we should have some humility about the ability of the physical sciences to explain extremely complex real world phenomena.

    [Reply: Indeed. But no one ever said it’s easy or that we know it all. The same humility should reasonably also be expected from people who attempt to criticize a whole scientific field as being either ignorant or fraudulent in a sudden and systemic manner. BV]

  440. Al Tekhasski Says:

    dhogaza writes: “If Al doesn’t understand that he and Bart are saying the same thing, lord help us.”

    No, we are not saying the same thing. In climatology speak, “earth climate” is a synonym for “global temperature [index]”. Therefore saying that “climate remains constant” is equivalent to saying “constant global temperature”. I am saying that an infinite number of climates (with different global temp indices) may have the same OLR. And zonal climates can “walk” while having the same total OLR, in balance with total insolation, while the global index varies. Do I need to spell it out more, like “sigma*T^4”?
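
    Spelling it out: emission goes as the fourth power of temperature, so by convexity (Jensen’s inequality)

        \overline{T^4} \;\ge\; \left(\,\overline{T}\,\right)^4,

    with equality only for a spatially uniform field. Two different spatial distributions of T can therefore share the same mean emission \sigma\,\overline{T^4} (the same OLR) while having different global mean temperatures \overline{T}.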

    [edit]

  441. David Admonson Says:

    Dho,

    See
    “# dhogaza Says:
    March 13, 2010 at 06:18

    That’s a funny response, which I thoroughly enjoyed, Jim :)

    (if you’re going to abbreviate my handle, though, it’s “dho”, not TCO’s “dhog”, it’s a type of raptor trap invented by Arabs 1,000 or so years ago, a “dho gaza”, I do raptor banding field work”

    you said so yourself ! Remember?

    BTW, I thought I had deleted my remarks before the Latin. Unfortunately I was diverted and hit submit; I apologize for my sarcasm.
    I look forward to yours.
    Note: my name is Adamson, not Admonson.

  442. Tim Curtin Says:

    Bumblebee asked: “Who is VS? Inquiring minds want to know. Why does he hide?” Perhaps for the same reasons as you, until you were outed at the NYT.
    But back to your physics. That is not in doubt, but what is uncertain is the practical significance of what increasingly appears to be no more than a trivial theoretical curiosum.
    For example, Jeffrey Kiehl of NCAR (GRL 28 November 2007) concludes after running n models that “the range of uncertainty in anthropogenic forcing of the past century [by a factor of two] is as large as the uncertainty in climate sensitivity…” and earlier that “the total forcing is inversely related to climate sensitivity”. Ye gods!

    I had gathered from you et al that the science is settled without a smidgeon of uncertainty – yet Kiehl here admits he hasn’t got a clue. Typically, despite being at NCAR, his paper contains not a single piece of evidence on any issue and least of all to show that the physics stacks up.

    Back to your raptors, they have more sense.

    [Reply: Don’t set up strawmen for knocking down. No one claimed that “the science is settled without a smidgeon of uncertainty”. I would argue that the main tenets are reasonably clear, and that there is a lot of uncertainty in the details. What I strongly argue against is the implicit claim of many that uncertainty is the same as knowing nothing. It’s not. Besides, more uncertainty means higher risk. BV]

  443. VS Says:

    Hi guys,

    Under Bart’s new entry there are some very interesting references posted. I would really appreciate it if people would post them here.

    This reference to Kaufmann et al (2009) also looks like a proper statistical set up (I still have to read it properly!). Nice one Alex Heyworth :)

    There was also a question in the other thread, posed by Alan, asking if I have proposed anything ‘new’. The answer to this question is no. Not really, at least.

    I just mentioned a single implication of this body of literature, namely, that given the (established) presence of a unit root, simple OLS based (multivariate or not) inference is invalid. This includes OLS based trend estimation and calculation of the relevant confidence intervals. Note that this is not a matter of opinion, but rather a formal result.

    Now, I haven’t seen anybody making that case clearly, and considering this, I believe somebody should.

    Also, I performed a Zivot-Andrews test in order to compare the null hypothesis of a unit root with the alternative hypothesis of a trend stationary process with a structural break in the sixties. I haven’t seen that one yet in the literature, although there is a good chance I simply missed it.

    Note that the endogenous breakpoint method indeed finds the hypothesized break in 1964, so no inconsistency there with what people have ‘eyeballed’.

    Again, we infer that the series contains a unit root.
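
    For anyone wishing to reproduce this: recent versions of statsmodels ship a Zivot–Andrews implementation. A sketch (the random walk below is only a placeholder for the actual anomaly series):

        import numpy as np
        from statsmodels.tsa.stattools import zivot_andrews

        rng = np.random.default_rng(3)
        temp = np.cumsum(rng.normal(scale=0.1, size=130))   # placeholder, e.g. 1880-2009

        # H0: unit root; H1: trend-stationary with one endogenous structural break
        za_stat, p_value, crit, _, break_idx = zivot_andrews(temp, regression="ct")
        print(p_value, 1880 + break_idx)   # p large -> unit root stands; break year located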

    No grand innovations here, just statistical consistency, and a good deal of objective testing.

    Cheers!

  444. Kweenie Says:

    [edit]

  445. Alan Says:

    VS remarked:

    Now, I haven’t seen anybody making that case clearly, and considering this, I believe somebody should.

    The link is to Google Scholar with “OLS trend temperature climate” as the search term and returning 531,000 hits.

    VS, I wonder where you are going with this … I really do.

    Having observed this thread, I reckon that you are implying that there is a huge body of research, by a vast number of research teams, that could be compromised because, hey, they aren’t sharp enough to get their data analysis methodology right. And, in particular, this failing appears to be the case in climate science.

    You may truly believe this and you say someone should “make that case”.

    You have road-tested your argument here. Where to from here?

    Seriously, there is little point that I can see in continuing here … if you feel strongly, go and put your case into a robust proposition and submit it to a peer-reviewed journal. A blog isn’t the place – despite its attractions.

    Are you sufficiently convinced that you have found a critical flaw in climate science and committed enough to “make the case”?

    If you are, that’s great … I will await the outcome keenly.

    If not, then this will become pointless flummery.

  446. Geoff Cruickshank Says:

    Thanks VS
    I learned some interesting things from your explanations.

  447. Jimmy Haigh Says:

    [edit] Dhogaza says:

    (if you’re going to abbreviate my handle, though, it’s “dho”, not TCO’s “dhog”, it’s a type of raptor trap invented by Arabs 1,000 or so years ago, a “dho gaza”, I do raptor banding field work”

    Now we use windmills.

  448. Bart Says:

    ALL: Play the ball, not the person. And say something substantial, or don’t say it at all.

  449. VS Says:

    Guys,

    Just for the record. I firmly believe in open and transparent science.

    Save a few incidents, I have thoroughly enjoyed this thread, and I think that the contributions of (most of the) unit root ‘skeptics’ are really raising the bar.

    Consider this full-fledged peer review. There is nothing as effective as a skeptic looking over your shoulder while you’re doing your analysis.

    Given that, I truly don’t understand why people are calling for me to ‘stop’ doing it here. We have access to a huge online community of experts. I welcome their opinions.

    Now, I have indeed claimed that a lot of the OLS trend analyses performed in climate science are incorrect, from a statistical point of view. I believe I have both the formal science and the test results on my side.

    Note also that when cointegration entered the game in the 80s, it buried a whole body of previously published macro-economic articles. You can imagine that the authors in question were not amused, but they conceded.

    That’s how science works.

  450. Tim Curtin Says:

    Hi BV: you replied to my last: “Don’t set up strawmen for knocking down. No one claimed that ‘the science is settled without a smidgeon of uncertainty’.” Really? I could name a lot of names, not least at the ANU here, e.g. Frank Jotzo, co-author of the Garnaut report, and Garnaut himself, passim, even before his Report came out.

    BV added: “I would argue that the main tenets are reasonably clear, and there is a lot of uncertainty in details. What I strongly argue against is the implicit claim of many that uncertainty is the same as knowing nothing. It’s not. Besides, more uncertainty means higher risk”.

    I suggest you read Skidelsky on Keynes (2009): risk and uncertainty are not the same thing at all. Insurers against risk (e.g. death) generally fare better than banks in financial crises, because their risks are actuarially based – at least until they move into banking like AIG, which took on all kinds of ‘risks’ that were actually uncertainties (like political risk, as in Greece): “the use of ‘risk’ to cover uninsurable contingencies [like climate change] conveys a spurious precision” – like the spurious correlations that VS has demonstrated.

    Kiehl’s paper remains rubbish, at least until you mount a better defence using evidence rather than his joke models.

    [Reply: Another strawman. I didn’t say uncertainty and risk are the same. I said that more uncertainty in the case of climate science (eg in climate sensitivity) means a higher risk (because the chance of catastrophic effects increases). BV]

  451. Paul_K Says:

    VS,
    Thank you for some fascinating, high-quality input. I admire your energy, if not your patience. I would suggest that you take your own advice: stick to your subject matter and ignore the [edit] individuals who go straight to ad hom in the absence of anything useful to say.

    I would also caution you against trying to defend the G&T paper. I recognise the context in which you made your comments. At the same time, I would say that one of the primary conclusions of the G&T paper – breach of the 2nd law of TD by AGW theory – is, to put it mildly, highly questionable. Your detractors will attach your name to support of the paper, which is by no means well supported by scientists on either side of the AGW debate, and forget the context in which you made your comments. I believe that would be a great pity, because your stats comments have been invaluable in my view, and should not be diminished in such a way.

    I fully support your quest for rigorous statistical methodology.
    Let us hope that over time, climate scientists themselves will see the essential need to respond to the challenge that they must upgrade their understanding and application of statistical theory.

    In my view, this is long overdue, not just in the methodologies for testing correlations in time-series, and very obviously in paleoclimatology, but perhaps more importantly in the “cause and attribution” studies founded on tuned GCMs and summarised in IPCC AR4.

    It genuinely surprises me that several posters here, who lay claim to some sort of scientific background, do not grasp the critical importance of statistical tools for the expression of confidence in the validation of ANY mathematical model or hypothesis against empirical or experimental data. The central argument against you (apart from poor understanding of TSA) seems to be along the lines that we don’t need statistics because the answer is already there in the physics.

    Without some such attempt at rigorous quantification of uncertainty between and within models, using the best tools available, the results will always remain a matter of faith in the absolute rightness of the underlying governing equations, as well as faith in the translation of such equations into a phenomenological FD form, with all of the potential sources of error implied by such a translation process.
    This of course leads to a dangerous circular argument: we know that the models are right because the physics is right, and the physics must be right because the models say that there is no other explanation; and there is no other explanation because we know that the physics is right.

    Well, it may even be true, but the logic supporting it is fallacious. I know of no way to break into this fallacious logic other than (a) a willingness to consider and test other models which might explain the observations better (to demonstrate that they really don’t!) and (b) the rigorous application of statistical tools to sort the wheat from the chaff.
    Before anyone screams at me that I must be a denier of basic physics, I would pose a serious question from my own personal list of uncertainties in the physics:
    I can apply Beer-Lambert to a 100% CO2 phase and find Einstein A and B coefficients without too much difficulty. CO2 lasers have been around for quite a while. Now can anybody point me to experimental data that tells me how to calculate the Einstein B coefficient for CO2 (or directly estimate the degree of kinetic thermalisation on a >10 m scale) to be expected in a known mixture of gases which includes other dipolar and diatomic molecules? The only experimental data I have seen (Heinz Hugg in 2000 or 2002?) suggested a variation of scattering with composition – something which, as far as I can tell, is not accounted for in any atmospheric radiative model. Does anyone have a reference to any updated experimental data?
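
    For reference, the textbook relations being invoked here are Beer–Lambert attenuation and the Einstein A–B link (with B defined against spectral energy density):

        I(L) = I_0\,e^{-\sigma n L}, \qquad A_{21} = \frac{8\pi h \nu^3}{c^3}\,B_{21},

    where \sigma is the absorption cross-section, n the absorber number density and L the path length. The question above concerns the experimental basis for the effective coefficients in a realistic gas mixture, not these relations themselves.
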
    Are we so sure of the physics that we can abandon statistical discrimination between models?

    [Reply: This whole discussion has no bearing on climate models (GCM’s). It has bearing on OLS regression. BV]

  452. Adrian Burd Says:

    VS,

    Please correct me if I’m wrong, but your argument seems to be that the statistical methods used to estimate temperature trends and confidence estimates of those same trends in climate data are invalid (because of the presence of a unit root). This seems to be an important point – physics aside for the moment.

    So, how do we correctly estimate the trend – or is this impossible to do given the data at our disposal? Is it impossible to say, from the present data alone, that global average temperatures have been increasing? Is it also impossible to say, from the data alone, that one of the major factors influencing any trend in global temperature is the rise in atmospheric CO2? I think these are really important points if this discussion is to be carried forward in any meaningful way.

    I appreciate that the following will get statisticians all a-twitter, but most scientists approach a problem from (at least) two directions – the data and theory. This seems to be particularly true in the environmental sciences. Hopefully these meet in the middle.

    Adrian

  453. dhogaza Says:

    It genuinely surprises me that several posters here, who lay claim to some sort of scientific background, do not grasp the critical importance of statistical tools for the expression of confidence in the validation of ANY mathematical model or hypothesis against empirical or experimental data. The central argument against you (apart from poor understanding of TSA) seems to be along the lines that we don’t need statistics because the answer is already there in the physics.

    No, it’s the fact that the conclusions arising from this particular analysis can be shown to be non-physical. Therefore, there’s an error somewhere in the analysis. It’s not that statistical analysis is unwanted, it’s that *erroneous* statistical analysis is unwanted.

    Let the self-proclaimed statistics expert figure out where B&R went wrong. He’s mysteriously silent about it.

    So, again, VS: where did B&R go astray? They’re wrong. Their results are physically impossible, and the impossibility has nothing to do with climate science specifically.

    Show us where they’re wrong and perhaps you’ll earn some respect.

  454. KenM Says:

    VS,
    Please allow me to add to the chorus thanking you for your contribution here. I’ve learned a lot.
    I keep going back to Tamino’s latest analysis, where he applied climate forcings as a covariate to the ADF test – the ‘CADF’ test.

    This struck me as a somewhat nonsensical proof of an absence of a unit root, since the climate forcing numbers are derived to explain the fluctuations in the temperature record. How could he *not* get the answer he wanted?(!)

    You touched on it briefly, mentioning that the climate forcings themselves contain a unit root, but I’m wondering if you might care to expand on why “climate forcings” make for bad covariates?

    [Reply: ? Of course it makes the most sense to use our estimate of climate forcings as the underlying forced trend. Actually, then you’d still have to account for phenomena such as ENSO, which are strictly speaking not a climate forcing but work to redistribute energy across the system (notably ocean-atmosphere), and as such do influence the atmospheric temperatures. I’d be curious about the test results in those cases (as indeed Tamino did with the net forcing). Climate model output could perhaps also be used, since it accounts for how the forcings actually influence the temperatures. BV]

  455. KenM Says:

    dho, you mentioned tamino has a new paper out in GRL. I can’t find it. Which issue?

  456. Scott Mandia Says:

    VS,

    I wish you to keep posting here and I thank you for your time. There is nothing wrong with a dissenting view if you can back it up.

    As I stated much earlier in this thread, I still think we need to use the precautionary principle with regard to emissions. We have nothing really to lose by limiting GHG emissions and almost everything to lose if we do not. Of course, “we” meaning the average person and not those in the fossil fuel and related industries.

  457. Shub Niggurath Says:

    Mr Scott Mandia
    Using the ‘precautionary principle’ is a step down from the earlier position of the anthropogenic camp. It goes from “we are the ones causing the warming” to “let’s shut down emissions anyway”. Right now, a lot of the scientific intelligentsia (re: Lindzen’s entire east and west coasts remark) think this way. Which raises the question: what do you need the theory of AGW for?

    Scientists using the ‘precautionary principle’ – which is nothing but nonsense masquerading as scientific reasonableness – raises the spectre of a lot of scientists having arrived at their AGW promotion via their environmentalism.

    If actions due to the theory of anthropogenic warming flow from its scientific conclusions, one should NOT do anything about ’emissions’ if the theory does not hold up.

    The theory of anthropogenic warming should not be used to derive support or funding for alternative energy sources.

    Regards

    [Reply: ? Nobody “needs” AGW theory. Scientists just try to understand the climate system. Bart]

  458. KenM Says:

    Of course it makes the most sense to use our estimate of climate forcings as the underlying forced trend. Actually, then you’d still have to account for phenomena such as ENSO, which are strictly speaking not a climate forcing but work to redistribute energy across the system (notably ocean-atmosphere), and as such do influence the atmospheric temperatures. I’d be curious about the test results in those cases (as indeed Tamino did with the net forcing). Climate model output could perhaps also be used, since it accounts for how the forcings actually influence the temperatures. BV

    Actually, it makes no sense at all to use this forcing as the covariate (underlying trend), since essentially that is what is being challenged.

    I say it’s a random walk (I don’t really, just for argument’s sake).

    You say it’s not, and the random appearance is explained by these forcings “X”.

    You say CADF proves it when (and only when ) you use those forcings as the covariate.

    The forcings were created to explain the fluctuations in the temperature record. Some are better than others. Some, like aerosols, are suspect. e.g. I can create my own aerosol estimates, modify the climate forcings Tamino used accordingly, and voila – the unit root is back.

    [Reply: You’re trying to create an image of circular reasoning which is not there. The climate forcings are not being challenged here at all. And they are not fitted to the temp record either. They are based on a combination of observations and radiative and other physics. Deal with it. BV]

  459. dhogaza Says:

    dho, you mentioned tamino has a new paper out in GRL. I don’t find it. Which issue?

    Hmmm, it’s been accepted, perhaps it hasn’t appeared yet. GF et al 2010.

  460. Ian Says:

    KenM, are you assuming that the forcings are derived from the temp trend itself? If that were true, I think you’d have a point – but the forcings aren’t derived by trying to find a fit with the temp data.

  461. Rattus Norvegicus Says:

    The paper is actually going to be in JGR.

  462. dhogaza Says:

    RN: oops, my bad, thanks for the correction.

  463. KenM Says:

    KenM, are you assuming that the forcings are derived from the temp trend itself? If that were true, I think you’d have a point – but the forcings aren’t derived by trying to find a fit with the temp data.

    If they did not, then how did they measure the mass and mixing ratios of atmospheric aerosols from 1945? It was my assumption that they created models with adjustable parameters including the mass and mixing ratio of aerosols. They adjusted the masses and ratios, applied the proper physics, and then confirmed the theory by noting the effect it should have had on temperature.
    Obviously if the expected effect did not match the observed change in temperature, they would have to :
    a) come up with a supplemental theory (something other than aerosols)
    b) change the mass and or ratios of aerosols in the model
    c) something else I can’t think of

    My cursory review of the literature suggests (b).

    [Reply: Knowledge of industrial output and typical emissions of said industries; knowledge of emissions of SO2 and other aerosol precursors and the relation with aerosol properties. There are observations and physics behind the whole story, despite your assertions to the contrary. BV]

  464. Paul_K Says:

    Dhogaza,
    You wrote:
    “No, it’s the fact that the conclusions arising from this particular analysis can be shown to be non-physical. Therefore, there’s an error somewhere in the analysis. It’s not that statistical analysis is unwanted, it’s that *erroneous* statistical analysis is unwanted.”

    It would help me to understand your perspective if you could be specific about what conclusions you believe to be non-physical. Thanks

    [Reply: My take on that question is here and here. Basically, temps being a random walk is inconsistent with energy balance considerations (among others, conservation of energy). BV]

  465. Scott Mandia Says:

    Shub,

    You missed my point.

    In the extremely unlikely event that somehow AGW is wrong, there is still nothing to lose by reducing carbon emissions and becoming more energy efficient.

    In the extremely likely case that AGW is correct, then doing nothing about reducing emissions will be a great tragedy.

    You should take a few hours/days to watch the Manpollo videos:

    http://manpollo.org/education/videos/videos.html

    A few quotes that drive the point home:

    “What’s the use of having developed a science well enough to make predictions if, in the end, all we’re willing to do is stand around and wait for them to come true?” – Nobel Laureate Sherwood Rowland (referring then to ozone depletion)

    “Scientific knowledge is the intellectual and social consensus of affiliated experts based on the weight of available empirical evidence, and evaluated according to accepted methodologies. If we feel that a policy question deserves to be informed by scientific knowledge, then we have no choice but to ask, what is the consensus of experts on this matter.” — Historian of science, Naomi Oreskes of UC San Diego

    “We built an entire foreign policy based on responding to even the most remote threats. Shouldn’t we apply the same thinking to a threat that is a virtual certainty?” — Daniel Kurtzman, political satirist

  466. Paul_K Says:

    Dhogaza,
    You also wrote:-
    “So, again, VS: where did B&R go astray? They’re wrong. Their results are physically impossible, and the impossibility has nothing to do with climate science specifically.”

    Again, can I ask you to be specific about what results are “physically impossible”? Your previous questions on the subject appear to have been asked and answered. What is still outstanding for you? Thanks

  467. MikeN Says:

    Wasn’t that the logic behind nuclear winter? I’m not going to say the science is wrong, because who wants to support nuclear war? So I’ll endorse the idea that a nuclear explosion will lower the planet’s temperature by as much as 35C.

  468. dhogaza Says:

    Again, can I ask you to be specific about what results are “physically impossible”? Your previous questions on the subject appear to have been asked and answered. What is still outstanding for you? Thanks

    Answered by whom? I ignore/don’t read Tim Curtin; you should too.

    The claim that 1 w/m^2 forcing from CO2 will lead to only 1/3 as much warming as 1 w/m^2 from solar insolation is *absurd* and *unphysical*.

    The fact that a CO2 molecule’s ability to absorb IR “fades” quickly is *absurd* and *unphysical* (if it were true, it would require an ever-increasing amount of CO2 just to maintain the planet at its current temperature, all things being equal).

    You can look this stuff up … or go ask some physicists on a physics forum.

  469. David Admonson Says:

    dhogaza Says:
    March 19, 2010 at 05:11
    Dho,
    I did read your [edit. Pot, kettle] posting BEFORE it was edited, for which I am waiting for an apology.

    VS, thank you for your contribution and patience; you are a gentleman.
    I think KenM has asked an important question “you touched on it briefly, mentioning that the climate forcings themselves contain a unit root, but I’m wondering if you might care to expand on why “climate forcings” make for bad covariates?”
    Would you care to comment?

  470. adriaan Says:

    adriaan Says:
    March 18, 2010 at 01:33

    @nigguraths,

    That is because the model derived results are openly discussed as being as important or even more important than actual observations. This point has also been raised by VS in one of his first posts on this thread.

    @Bart,

    The observations are not in agreement with the physical model, so the observations must be wrong.

    What can I do?

    [Reply: Strawman argument. No scientist has made such a claim. The observations *are* in agreement with the physical models. Hansen states in pretty much any talk he gives that our understanding of climate change is based on three pillars: current observations, paleodata, and physics based modeling. He notes that the former two are the most important. BV]

    The essential message of what VS was telling us is that even if the models are in agreement with the observations, that does not prove that the models are right. And the fact that Hansen says so every time is, for me, reason to look further (personal motive).

    What VS is telling us is that, by treating the problem in a different way, as a random walk problem, the apparent correlation between the increase of CO2 levels and the rise in temperature becomes non-significant. And even if the physics is firm, stable, solved, whatever, this is not true, and will not become true. You cannot end a discussion by stating that the science is settled. I am a biologist; we are rewriting our science by the day. What was true yesterday is proven false today. The science cannot be settled. If one receives signals that a given interpretation of data can also be interpreted by different means and give different conclusions, then you ought to revise your theory. And this is no strawman argument. I can think for myself. Which is something a lot of people apparently cannot or simply refuse to do.

    [Reply: VS has later clarified that he did not mean that temps are a random walk. Only that they contain a unit root, which has consequences for OLS regression. Not for climate models, not for AGW. See here.

    VS, will you do me favor and set all these people straight who want to walk away with your thesis here and claim all kinds of things that are unsupported?

    BV]

  471. Ian Says:

    adriaan – surely the whole of biology isn’t rewritten by the day? All sciences have problems and areas that see a lot of activity and progress from time to time, without having to overturn the discipline.

    A few people have mentioned the notion of resolving a conflict between observations and models. I’m not suggesting that observations in general should be underweighted, but it’s interesting to note that GCMs in several cases were discrepant with observations, and the discrepancies were resolved in favor of the models once better data were available. For instance, CLIMAP ocean temps were revised down, close to model results, and MSU data suggesting cooling (or less warming) were corrected, bringing them in line with models.

  472. adriaan Says:

    @Ian,

    What I am objecting to is the fact that things like GCMs are complicated sets of rules based on physics, but that the ensemble of rules is treated as being physics. It is not. In biology, the entire set is not rewritten every day, but we find new interactions between already known components every day. How can you be sure that, within your models (and they are models, nothing more), new interactions cannot reveal new findings? No GCM is able to deal with compartmentalisation of energy. This is one of the major challenges in biology. And I can do thousands of experiments per week. All you are doing is tweaking the parameters of your models to agree with observations. But this is not synonymous with understanding what is actually happening in the climate. A model is a primitive abstraction of reality. And it works as long as reality allows it to.

  473. Shub Niggurath Says:

    “In the extemely unlikely event that somehow AGW is wrong, there is still nothing to lose by reducing carbon emissions and becoming more energy efficient.”

    Internal combustion with crude oil/gas derivatives is among the most energy-efficient modes of power production invented and improved upon. Fossil fuel consumption is the foundation of Western civilization, especially in the Northern hemisphere.

    Compare that with ‘green’ wind and solar power, for example. Abysmal output, requiring monstrous government subsidies derived from taxation of human productivity which is based on fossil-fuel burning, and most importantly – no input control whatsoever – that’s what these things are. Very energy efficient indeed! :)

    Yes – there is nothing wrong in becoming energy-efficient. You do not need a theory of anthropogenic warming for that. That was my point to begin with.

    Your Daniel Kurtzman and Naomi Oreskes quotes could just as well be turned on their head.

    For example, the Oreskes quote implies I should ‘trust the experts’. Given the fact that trust in climate science is at its nadir for well-founded reasons right now, and many of them seem, from their own words, to be philosophical lightweights – I think I’ll do fine on my own, thank you. I am speaking from personal experience here; just examine the tenor of posting from the AGW camp regulars at RealClimate etc. – I can name names, and VS – a newcomer to the climate blogs – noticed the same thing. Do you think they inspire trust? None of them sound like experts – more like street thugs. I’ll say this once more – the Latin root of the word ‘doctor’ is docere, to teach.

    Regards

    [Reply: And what’s the word for ‘learning’? And ‘listening’? And ‘humility’? BV]

  474. ianl8888 Says:

    Thanks for this thread – it was extremely interesting on a large number of levels, and I hope we see many more examples on other aspects of AGW

    I have kept a copy of the whbabcock post (March 17) as the most accurate summary of the various elements at play here

  475. adriaan Says:

    Dear Bart,

    You try to hide the things that VS has shown. And I think VS was more right than wrong, without knowing anything of your nice models. Let me explain. In the IPCC report, WG1, chapter 2, page 213, note a: this formula models atmospheric CO2 concentration (right?). Can anyone explain the physical basis of this formula? Dhogaza?

    [Reply: Arrhenius? Tyndall? Or the Rabett. Bart]

  476. MP Says:

    @VS, Bart

    The statistical analysis performed by econometricians leads to two major statements. First, several tests suggest that the global surface temperature time series has integration order I(1). Secondly, several anthropogenic radiative forcings (ARFs) have integration order I(2). I think that these findings do not necessarily contradict the current understanding of AGW.

    1. I(1) for global surface temperature
    This finding does not mean that global T is a pure unbounded random walk; e.g. a deterministic linear trend also has integration order I(1). In fact this would actually fit with a linearly increasing forcing, e.g. log(CO2) in the last 50 years. Because the global T dataset is relatively short and is dominated by an increasing trend, it is more likely to find a unit root; if a longer time series were used, the integration order would be I(0). The temperature over longer time scales is bounded and therefore stationary (temperature cannot run away, because of energy conservation).

    2. The integration order of global T (I(0) or I(1)) is not the same as the integration order of the ARFs (I(2)); therefore global T cannot be determined by the ARFs.

    At first sight this statement seems valid; however, the variability in global T is not only determined by anthropogenic forcings but also by natural variability like ENSO, volcanic eruptions, solar variation etc. If the variability in global T (first order difference) were purely determined by ARFs, the integration orders should be the same.

    To investigate the above statement I obtained and normalized the time series for ENSO and ARFs
    ENSO:
    ftp://www.coaps.fsu.edu/pub/JMA_SST_Index/jmasst1868-today.filter-5
    sum of anthropogenic forcings :
    http://data.giss.nasa.gov/modelforce/RadF.txt

    Using these two datasets I have created artificial temperature series using T = (1-f)*E + f*F, where I varied the relative contributions of ENSO and ARFs from pure ENSO (f=0) to pure ARF (f=1) and checked the integration order for each T-series using the MATLAB adftest, allowing up to 2 lags. I also obtained and normalized the GISS global T dataset and for comparison plotted it together with the artificial T-series. Using 2 lags I also obtain integration order I(1) for the GISS time series (with no lags I obtained I(0)).
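
    For anyone wanting to replicate the flavour of this exercise without MATLAB, a rough Python equivalent follows (statsmodels’ adfuller; note that I substitute synthetic stand-ins for the normalized ENSO and forcing series here, so only the qualitative behaviour should be compared):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        n = 130  # roughly the length of the annual record

        E = rng.standard_normal(n)                                       # stand-in for the stationary ENSO index
        F = np.cumsum(np.cumsum(0.01 * rng.standard_normal(n)) + 0.01)   # smooth, I(2)-like forcing stand-in

        def integration_order(x, maxlag=2, alpha=0.05):
            # difference until the ADF test rejects a unit root
            for d in range(3):
                if adfuller(x, maxlag=maxlag)[1] < alpha:
                    return d
                x = np.diff(x)
            return 3

        for f in np.arange(0.0, 1.01, 0.1):
            T = (1 - f) * E + f * F
            print(f"f = {f:.1f}: apparent order I({integration_order(T)})")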

    The results are plotted in the figure linked below:

    The results show that for f=0-0.5 the integration order is I(0), for f=0.6-0.9 the integration order is I(1), and only for f=1.0 (pure ARF) the integration order is I(2). This clearly demonstrates that mixed time series will give a mixed integration order. Moreover, adding only a little bit of noise to the ARF series already lowers the integration order from I(2) to I(1) (note the remark by Eli Rabett). Hence the conclusions by B&R are premature and are not supported by a more detailed analysis of the different sources of variability.

    Furthermore I’d like to note that on the time scale of decades CO2 should show at least integration order I(1), because humans are adding CO2 incrementally to the atmosphere. This notion is consistent with global T being close to I(1); in any case, a finding of I(1) is not a reason at all to assume global T is a pure non-deterministic random walk.

  477. adriaan Says:

    With regard to radiative forcings and positive feedbacks, I would like to draw your attention to
    http://www.nature.com/nature/journal/v463/n7280/edsumm/e100128-07.html

    Which, in my humble opinion, shows that your positive feedback has been severely exaggerated.

    [Reply: Read again. BV]

  478. dougie Says:

    VS, fresh air at last.
    get over to
    http://noconsensus.wordpress.com/2010/03/17/anomaly-aversion/

  479. adriaan Says:

    Is this all physics? Or what is it? Please explain.

  480. Rattus Norvegicus Says:

    Except that the additional CO2 released as a result of increased warming is not the principal feedback. It is the increase in absolute humidity.

    The paper you pointed to is interesting; there is a discussion of it here. Bottom line: Frank’s estimates for the historical period studied, 1050 AD to 1800 AD, come in at the low end of the previously estimated range and don’t affect the fast-feedback estimate of climate sensitivity, which is currently what the IPCC quotes: 2.5C – 4.0C with a best estimate of 3C.

  481. tgv Says:

    If radiation in must equal radiation out to satisfy this rather odd version of the 1st law of TD that is promoted here, then how do you account for inductive energy transfer and tidal energy transfer (wobble)?

    I would suggest that it is the height of arrogance for climate scientists to divine what is ‘physical’ and what is ‘not physical’. Much is not known about the energy balance of the earth.

    [Reply: I would suggest that it’s the height of arrogance to claim (without being hampered by evidence or understanding, apparently) that a whole scientific field has it radically wrong. Take it elsewhere. BV]

  482. adriaan Says:

    @Rat,

    Whatever, but what they showed is that the magnitude of the positive feedback is factors lower than IPCC has estimated. Or not?

  483. adriaan Says:

    @Rat,

    And they did use the flat hockeystick proxies in their calibrations, which means the actual feedback should be much lower if we take into consideration that there was a MWP followed by the LIA?

  484. Tim Curtin Says:

    Adriaan said “You (Bart) try to hide the things that VS has shown. And I think VS was more right than wrong, without knowing anything of your nice models. Let me explain. In the IPCC report, WG1, chapter 2, page 213, note a. This formula is modelling atmospheric CO2 concentration (right?).” No, wrong.

    Adriaan added: “Can anyone explain the physical basis of this formula? Dhogaza?” Not the latter, nor anyone; there is no basis of any kind for it.

    The formula states clearly that IPCC (i.e. authors of WG1, ch.2) has simply chosen to define “the decay (sic) of a pulse (sic) of CO2 with time t” by the formula.
    Atmospheric CO2 does not decay, although as much as 57% of the “pulses” since 1958 (running at c. 10.5 GtC p.a.), and at least 15% p.a. of the basic stock (c. 760 GtC in 2000; Houghton J., 2004, p.30), is taken up by the global biota in the process of photosynthesis, without which not even Bumble would be around. The formula, typical of WG1, does not refer to the circulation of CO2. I would not want any of the authors of WG1 in charge of the stockrooms at Kmart or anywhere else, as what we have with CO2 is a need for inventory analysis: there is a huge turnover, and no “decay”, as CO2 does not wear out, but recycles from the atmosphere to living matter by photosynthesis and back via respiration. The individual molecules never die; as I recall from my primary schooling, “matter can be neither created nor destroyed”, but they can transmogrify!

    The Bern Carbon cycle model referred to in the footnote cited by Adriaan basically assumes that the photosynthesis process has already or soon will terminate (in the MAGICC version used by WG1), or at least reach a ceiling. In that case mass starvation a la Ehrlich and Holdren will surely eventuate, as devoutly wished by all at CoP 15 in Copenhagen with their determined efforts to extinguish the “pulses” (emissions) without which we will all die a lingering death.

  485. adriaan Says:

    @Tim,
    You expressed my feelings a bit more harshly than I would have done. But you seem to agree that there is no basis for the IPCC carbon model, based on the Bern carbon cycle? Is this physics? Or is this not physics?

  486. adriaan Says:

    @Tim,

    If you are right, what are we talking about on this blog?

    I like the approach by VS, he(she) has taught me a lot about how to look at these data. I think I will be drinking a beer with him(her) on a warm, sunny outdoor table somewhere in the Netherlands.

  487. Rattus Norvegicus Says:

    Adriaan, the best evidence we have right now (and it is skimpy, which is why Jones said the jury is still out) looks something like this. Compare this with the second chart in this post which shows the spatial extent of today’s warming against the same base period.

  488. VS Says:

    Bart,

    I take it that with net CO2 forcings, you mean the first differences of the CO2 forcings series? (If not, my apologies, and do refer me to the right data/transformation).

    We established here, that the CO2 forcings series in fact contains two unit roots. This means that the series needs to be differenced twice in order to obtain stationarity. In other words, after taking first differences, the series still contains a unit root.

    As for that Covariate Augmented Dickey-Fuller test, as proposed by Hansen (1995) and used by Tamino: that one too assumes stationarity of the regressor (or ‘covariate’). In the case of the CO2 forcings (used by Tamino), this assumption is clearly violated.

    I cannot stress enough how important this is. Here are the textbook treatments of spurious regression (i.e. the consequences of ‘ignoring’ unit roots, in the context of OLS inference) that I found in my bookshelf.

    – Davidson and MacKinnon (2004), pp. 609-610, Regressors with a Unit Root
    – Hamilton (1994), pp. 557-561, Spurious Regressions (very formal treatment)
    – Greene (2003), pp. 632-636, Random Walks, Trends, and Spurious Regressions
    – Verbeek (2004), p. 313, Models with Non-stationary Variables – Spurious Regressions (undergrad treatment)

    Spurious regressions are furthermore characterized by, from Verbeek (2004): “..a fairly high R2 statistic, highly autocorrelated residuals, and a significant value for beta”. Note that in this case it refers to regressing two unrelated RW variables. The case extends to more complex specifications containing regressors with unit roots.
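
    Anyone can reproduce this Granger-Newbold effect in a few lines; a minimal simulation sketch (two independent random walks, so the true relation is nil; statsmodels’ OLS):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 130
        x = np.cumsum(rng.standard_normal(n))  # random walk 1
        y = np.cumsum(rng.standard_normal(n))  # random walk 2, independent of x

        res = sm.OLS(y, sm.add_constant(x)).fit()
        print(res.rsquared, res.tvalues[1])    # often a high R2 and a ‘significant’ t-value

    Run it with a few different seeds: ‘significant’ slopes on unrelated series turn up far more often than the nominal 5%.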

    Now, while we’re at it, once again, for the record. A random walk implies the presence of a unit root, but the presence of a unit root (what we have established) does not, in its turn, imply a random walk process (i.e. broccoli is a vegetable, but not all vegetables are broccoli :).

    Also, for those interested in the landmark paper that wrought all this about, and fetched a Nobel prize in the process, it’s Granger and Newbold (1974), entitled ‘Spurious regressions in econometrics’, published in the Journal of Econometrics.

    At risk of kicking a man while he’s down, I sincerely urge people to stop referring to Tamino’s analyses in the context of this discussion. In all of his blog entries, he implicitly assumes temperatures (and e.g. forcings) to be a trend-stationary process. I think we have, by now, shown this not to be the case.

    Finally, I would like to reiterate that the TSA body of literature is not trivial. The unit roots we have been discussing here over the past two weeks concern the first chapter of Hamilton (1994), which stretches some 10-15 pages (those pages contain much more, in fact). The book itself is almost 800 pages thick, and consists for the most part (70%+) of pure formal notation (i.e. mathematical statistics packed in matrices).

    ————-

    Hi Adrian Burd,

    The ‘answer’ in this case is cointegration analysis. This allows for forecasts, proper confidence interval estimation, and the estimation of relations between the various variables of interest. The presence of a unit root doesn’t mean we cannot perform any statistical analysis. It does however dictate which statistical method we need to employ (i.e. cointegration analysis).
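
    For a feel of how such a test looks in practice, here is a minimal Engle-Granger sketch on simulated data (statsmodels’ coint; the two series are constructed to share one stochastic trend, which is precisely the property the test looks for):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(7)
        n = 150
        trend = np.cumsum(rng.standard_normal(n))       # shared I(1) stochastic trend
        x = trend + 0.3 * rng.standard_normal(n)
        y = 2.0 * trend + 0.3 * rng.standard_normal(n)  # related to x only through the trend

        tstat, pval, _ = coint(y, x)
        print(f"Engle-Granger p-value: {pval:.4f}")     # small p-value: evidence of cointegration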

    I’ll try to find time to make a more elaborate post on cointegration in the near future. However, to give you a spoiler, here’s how the BR model specification predicts when it is estimated on the first half of the observations (and projected over the second half): http://landshape.org/enm/testing-beenstock/

    Not bad, huh?

    N.B. David Stockwell referred to the specification as ‘Beenstock’s theory’. I would prefer to call it what it is: a model specification.

    ————-

    Hi MP,

    Your point (2) is slightly misleading. The whole idea behind polynomial cointegration, as proposed by BR, is that it allows for I(1) and I(2) series to be related. The different orders of integration only imply that the series cannot be cointegrated linearly, not that they cannot be cointegrated at all.

    A couple of questions w.r.t. your analysis:

    (1) In your ADF test you allowed for a maximum of 2 lags. However, when analyzing the temperature data we found 3 lags to be an absolute minimum. Why did you choose such a low ‘maximum’ level? Standard econometric software packages often set the maximum lag length higher than 10. Note that the matlab function ‘pushes’ the number of lags up to your ‘maximum’. This implies that it would ‘like’ to choose a higher number of lags, and that you have in fact obtained a ‘corner solution’ in terms of IC optimization.

    (2) What does your (test equation) autocorrelation structure look like in terms of Q-statistics?

    (3) What do the Jarque-Bera tests for normality of disturbances indicate?

    (4) What do you get if you allow AIC or HQ based lag selection, with a max lag length of, say, 10?

    (5) Did you try applying the KPSS testing procedure? This would allow for testing ‘from the other side’, and might give us some indication of the robustness of your results.

    (6) And last but definitely not least, would you mind posting all your data somewhere, together with the exact transformations you employed (in terms of columns of those two matrices you link to), so that we can replicate your findings? I found it hard to infer what you did exactly from your post.

    Please don’t mind the inquisitiveness, it’s ‘professional deformation’. The effort is appreciated!

    PS. It would be interesting to formulate a cointegration model, a-la BR, that allows for the ‘f’ parameter in your analysis to be fitted/estimated, rather than assumed.

    ————-

    adriaan, finally terrace weather! Facebook went berserk today.. ;)

    VS

    [Reply: I don’t mean the net CO2 forcings, but the net ‘all’ forcings (ie also including aerosols, non-CO2 GHG, solar, volcanoes, etc). See eg here estimates of the forcings as used in the GISS model. Before the chorus starts bashing this as being model derived and therefore not worthy of attention, please spend some moments reading how they’re actually estimated. Hint: observations and physics. As I wrote in an earlier in-line reply, there are other factors that influence temp which are not considered a forcing, eg ENSO, which redistributes heat and thereby influences the atmospheric temp without affecting the radiative balance at TOA. Then there’s the fact that the forcings don’t translate 1 to 1 to temps; due to all kinds of other physical relations that are incorporated in the models, they are e.g. ‘smeared out’ in their temp effect. If you’d want to do a serious physics based analysis, these kinds of things would have to be taken into account. That would be a very interesting exercise indeed. Let me know if you’d want to pursue this. BV]

  489. Rattus Norvegicus Says:

    Tim, I would like a cite to a real paper, not some accusation in the press or on a blog, that the CCCC used by MAGICC “assumes that the photosynthesis process has already or soon will terminate”. A quick check around the UCAR site yielded no clues.

  490. Rattus Norvegicus Says:

    VS, it is not net CO2 forcings, it is net FORCINGS, the sum of all forcings both positive and negative.

  491. VS Says:

    Hi Rattus,

    GHG forcings are I(2) as well. Furthermore, they cointegrate into an I(1) series. Solar irradiance is I(1).

    See BR or Kaufmann et al (2006).

  492. MP Says:

    @VS,

    I just wanted to point out that the variability in global T is rather complex and that the different components affect the observed integration order.

    If I find some time I will try to pass you the data and code.

    Regarding f, there are several multi-regression papers that analyse the different natural and anthropogenic contributions to global T. See my comment above
    MP Says:
    March 17, 2010 at 14:40

    I chose 2 lags because I found I(1) with that for GISS. If I choose more than 2 lags the orders increase progressively… however, this weakens the ADF test. I still get a mix…

  493. VS Says:

    Hi MP,

    Information criteria actually capture the ‘trade-off’ you sketched (these are so-called ‘entropy’ measures). Try performing your analysis with the max lag length set to 10, while letting the ICs pick the lag length freely.
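
    In Python this is one keyword argument; a self-contained sketch (the series here is just a synthetic stand-in, not the GISS data):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        temp = np.cumsum(np.random.default_rng(0).standard_normal(130))  # stand-in series
        stat, pval, usedlag, nobs, crit, icbest = adfuller(temp, maxlag=10, autolag="AIC")
        print(f"ADF stat {stat:.2f}, p = {pval:.3f}, lags chosen by AIC: {usedlag}")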

    I’m looking forward to your results (as well as your data)!

    Cheers, VS

  494. Rattus Norvegicus Says:

    VS, I suggest you look at CO2 forcing since 1958 using the Mauna Loa data. I would like to see your results, because that data has both a clear trend and little interannual variation.

  495. Rattus Norvegicus Says:

    Umm, that should have been concentrations; forcing is log(CO2).
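
    (For reference, the standard simplified expression (Myhre et al. 1998) is RF = 5.35 * ln(C/Co) W/m2, so a concentration series maps to a forcing series through a logarithm.)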

  496. Tim Curtin Says:

    Rattus said: Tim, I would like a cite to a real paper, not some accusation in the press or on a blog, that the CCCC used by MAGICC “assumes that the photosynthesis process has already or soon will terminate”.

    I have documented this before, in a published peer-reviewed paper of my own (Climate Change & Food Production, 2009:1101, available at http://www.timcurtin.com). It shows how Tom Wigley (formerly Director of CRU at UEA) adopted (Tellus, 1993) the Michaelis-Menten formulation of a hyperbolic relationship whereby rising [CO2] has an initial beneficial impact, cet. par., on biotic uptake by net primary production (NPP) that tapers off rapidly and then hits a ceiling, whereby further rises in [CO2] have zero impact on either yield or NPP, which thereby no longer absorbs emissions of CO2, so gross emissions henceforth equal net emissions. Actually net emissions have been just 43% of gross since 1958.

    Wigley and Enting (CSIRO, 1993) formalised this assumption and it is enshrined in Wigley’s MAGICC model, which forms the basis of the WG1 projections of [CO2] from 2000 to 2100 and beyond (WG1, chapter 8, and especially the Supplementary Material), because it limits growth of future absorption of emissions to nil and thereby validates the Madoffian assumption (eg Solomon et al 2007, and in PNAS 2009) that although [CO2] grew by just 0.41% p.a. from 1958 to now, from 2000 to 2100 and doomsday it grows at at least 1% p.a. if not more.

    Now because Wigley is not an economist, and as so-called economists like Stern & Garnaut never question THE science, the Michaelis-Menten assumption, which is perfectly valid for that tomato now, is not valid for all tomatoes at all times, as varieties improve every year, and there is no law (yet) stopping me from starting to plant tomatoes in that unused area of my veggie patch, despite the MAGICC claim that they can never grow and absorb CO2.

    Apologies, Bart, for length, but a serious question deserved a full answer, and it is not OT, because the rate of growth of [CO2] determines the growth of radiative forcing, and when that is exaggerated, it is not surprising that forward projections of RF from 2000 have already failed to yield the predicted rise in GMT.

    So Rattus, BV, Adriaan, and VS, close down your greenhouses NOW, for whatever you do you will never as per Wigley be able to increase their uptakes of CO2! But some day I hope to join you all in a beer (full of CO2 as it is).

  497. Rattus Norvegicus Says:

    I hate to say this Tim, but E&E? And what you claim in your post is a far cry from “assumes that the photosynthesis process has already or soon will terminate”.

  498. HAS Says:

    The problem with blogs is that you can very quickly be diverted into areas outside the current issue under debate. The issue here is about the empirical properties of two time series central to climate change, and what the implication might be for climate change models.

    Now rather than a straightforward discussion I found myself tripping over some sensitivities about climate models being derived from physics, so that the statistical issues could have no implications for the models and their results.

    My instinct was that this had to be just nonsense and surprising from those active in the field (I somehow didn’t believe that neither the physical sciences were sufficiently advanced nor computing power sufficient to develop a deterministic model from first principles sufficiently rich to describe and predict climate.) I should add that I did know that statistical issues abound in the estimation of the two time series mentioned, and that there was room for improvement.

    So I thought in all fairness I should check at the IPCC AR4.

    And of course the use of statistical parameter estimation abounds in these models, and particularly in those areas most germane to the impact of GHG. In addition to parameter estimation these models are tuned to improve their performance, which also involves statistical analysis comparing model results to actual data (and no doubt the use of the very time series that are the subject of this thread).

    So dhogaza and BV, these kinds of statistics have real implications for your science (as a number of other commentators have ably but less directly pointed out).

    VS: Just as an aside, since temperature at a location is strongly correlated with temperature close by, I assume that the estimation of errors in estimates of grid temperatures could potentially suffer from similar issues as those raised for these time series?

  499. VS Says:

    Very good point HAS! We’re not there yet (patience, we’re going through the matter at a snail’s pace), but again, that’s a very good point!

  500. Tim Curtin Says:

    Rattus: that is very glib. Name the journal that would publish anything pointing out the impact of cutting atmospheric CO2 on food production. Nature? I tried; see my Note (also on my website), which it declined to publish, pointing out that the Meinshausens et al in two papers in Nature, 30 April 2009, explicitly assumed zero uptakes or worse.

    Here is what Nature’s leader endorsing Meinshausens et al and Allen et al (both in Nature, 30 April 2009) had to say: “The 500 billion tonnes of carbon that humans have added to the atmosphere lie heavily on the world, and the burden swells by at least 9 billion tonnes a year (sic)” (p.1077), even though the actual increase in the atmospheric concentration of CO2 (i.e. [CO2]) recorded at Mauna Loa between May 2008 and May 2009 was only 1.68 parts per million by volume (ppm), equivalent to 3.56 billion tonnes of carbon (GtC), implying that it is TOTAL cumulative or annual emissions that determine climate change, not the atmospheric concentration that emerges after taking into account net uptakes of carbon emissions.

    Allen et al in Nature, 30 April 2009, SI, stated explicitly (SI, p.6): “the terrestrial carbon cycle model has both vegetation and soil component stores. The vegetation carbon content is a balance between global average net primary productivity (NPP) *(parameterized as a function of atmospheric carbon dioxide, which asymptotes to a maximum value multiplied by a quadratic function of temperature rise in order to represent the effect of climate change)* and vegetation carbon turnover” (my italics in asterisks). Thus the Allen paper explicitly assumes that net carbon uptakes become first zero and then negative as allegedly “climate change” reduces NPP.

    So if I cite that in E&E you infer it is not what they said?

  501. Tim Curtin Says:

    Rattus, further to my last, the source of that assumption in Allen et al 2009 is, as I said, Wigley 1993, who rejects the logarithmic form for projecting uptakes of CO2 by the biosphere:

    NPP = No(1 + beta*ln(C/Co)) … (A1)

    in favour of

    NPP = [No(C-Cb)(1 + b(Co-Cb))] / [(Co-Cb)(1 + b(C-Cb))] … (A2)

    Wigley’s A1 “allows NPP to increase without limit as C increases” (which has always been the case so far, see Curtin 2009), so he says it should be replaced by A2, whose hyperbolic form ensures that NPP reaches a ceiling with respect to increases in [CO2], around 2000 according to WG1, for which there is no evidence; see Knorr W., GRL, 2009, if you don’t believe me. Allen’s contribution is to make it quadratic, so we should already be seeing declines in total world NPP. Are there?
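
    To see the difference in shape between the two forms, a short Python sketch; every parameter value below is invented purely for illustration, not Wigley’s calibrated numbers:

        import numpy as np

        No, Co, Cb = 60.0, 280.0, 31.0   # made-up: reference NPP, reference [CO2] (ppm), photosynthesis floor
        beta, b = 0.6, 0.03              # made-up response parameters

        C = np.linspace(280, 1120, 5)
        npp_a1 = No * (1 + beta * np.log(C / Co))                                           # A1: keeps growing
        npp_a2 = (No * (C - Cb) * (1 + b * (Co - Cb))) / ((Co - Cb) * (1 + b * (C - Cb)))   # A2: saturates
        for c, a1, a2 in zip(C, npp_a1, npp_a2):
            print(f"C = {c:6.0f} ppm: A1 = {a1:5.1f}, A2 = {a2:5.1f}")

    Whatever the parameter values, A1 is unbounded in C while A2 approaches the ceiling No(1 + b(Co-Cb))/(b(Co-Cb)).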

    It is A2 and its built-in ceiling on increases in NPP that determines the projections in MAGICC, which was developed by T.M.L. Wigley, S. Raper and M. Hulme (all of CRU/UEA) and is available at http://www.cgd.ucar.edu/cas/wigley/magicc/index.html.

    WG1 describes its use of MAGICC in 8.8.2. I have MAGICC and it has no module for overruling A2, and thus it has its limitations as a computer game, which is all MAGICC is, and a very bad one at that.

  502. HAS Says:

    I can’t believe that I actually wrote:

    “I somehow didn’t believe that neither the physical sciences were sufficiently advanced nor computing power sufficient to develop a deterministic model from first principles sufficiently rich to describe and predict climate.”

    I have tried to parse this but have totally failed!

    What I don’t believe is that science can produce the model, nor that computers could model it.

    VS: when you get round to it, my searching here started with “Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850” (2005) by P. Brohan, J. J. Kennedy, I. Harris, S. F. B. Tett & P. D. Jones, and then back from there into “Estimating Sampling Errors in Large-Scale Temperature Averages” (1997) by P. D. Jones, T. J. Osborn, and K. R. Briffa. This makes adjustments for intercorrelations, but I’d be interested to know how it stacks up against more recent developments in the field. Also, if I understand it right, the SE equation used by Jones is derived from precipitation models and data, and depends upon estimates of inter-site correlations derived from empirical relations, without carrying the variances in those estimates through to the estimates of the SE.

  503. IanH Says:

    Rattus Norvegicus @ 05:30
    “I hate to say this Tim, but E&E?”

    Please Rattus, don’t go there; don’t try and pretend we’ve not read the Climategate & NASA emails. A claim to authority, or lack of it, in the peer-reviewed journals suggests you’ve not read them, or don’t understand their implications.

    [Reply: Referring to E&E is a bit like referring to the Journal of Creation Science as a refutation of evolutionary biology. It’s not even listed in the ISI, the editor said that she’s following her own political agenda, and indeed it’s basically an outlet for anything, no matter how absurd, as long as the message is “AGW is wrong”. BV]

  504. Alex Says:

    It seems that most readers of this blog have by now acknowledged the importance of testing for the presence of a unit root, since the presence of one has serious consequences for OLS. From time to time, though, the unit root issue is still mixed up with the random walk hypothesis. As I stressed earlier, a random walk model contains a unit root, but the presence of a unit root doesn’t mean that the temperature series is a random walk. In my previous post I actually tested the random walk hypothesis and concluded that, on the basis of statistical testing, temperature is not a random walk. I repeat it here because it is important not to mix these two things up.

    Some have also been asking what the implications are of a unit root for physical or climate theories, since the focus here has been mainly on the implications for statistical tests. Whether a unit root has any implication depends on what the theories say (implicitly) about the presence of a unit root. There are three possible situations.

    Situation 1:
    Theory is indifferent to the presence of a unit root. In this case it would not matter whether there is a unit root, and so tests for a unit root cannot be used to test the theory itself. However, from a statistical point of view it will still be relevant.

    Situation 2:
    Theory excludes the presence of a unit root. In this situation the different unit root tests (if appropriate under the given circumstances) can be used to test the theory. If a test fails to reject a ‘unit root null hypothesis’ or rejects a ‘no unit root null hypothesis’, then this can be taken as evidence against the theory, since it is in clear contradiction with one of its predictions.

    Situation 3:
    Theory requires the presence of a unit root. In this situation rejecting a ‘unit root null hypothesis’ or not rejecting a ‘no unit root null hypothesis’ can be taken as evidence against the theory.

    So no matter which situation we are in, from a statistical point of view we should always test for unit roots. However, unit root tests are only relevant for physical theories in the second and third situation.

    Now in principle it is possible to derive analytically whether a certain theory will have a unit root, though sometimes this can be quite tricky. The way to do this is by specifying the theory in a set of equations. From these equations one can derive the statistical model from which one can derive whether or not a unit root is present or whether there could be a unit root, but that it doesn’t really matter (In econometrics vernacular this is called ‘nested’).

    Several people raised the question whether a unit root is related to the amount of ‘noise’ or ‘randomness’. Maybe it is related terms like stochastic trend or random walk that make people think this, but a unit root has nothing to do with the random part of the model. It is possible to have a model with an R^2 of 99% (which means that only 1% is random) that has a unit root. This is because the unit root is in the deterministic part of the equation and not in the random part. I hope this clarifies why arguments for or against the presence of a unit root on the basis of the amount of randomness are wrong. The only way to establish whether a theory predicts the presence of a unit root is via analytical derivation.

    A last point of concern I would like to raise is when people compare temperature graphs to graphs of a series with a unit root they found on the internet. Most (if not all) graphs I have seen on the internet showing a process with a unit root show a random walk. This is understandable, since a random walk is probably the simplest model with a unit root there is. However, a random walk takes on a very distinctive pattern, which can look quite different from other models with a unit root. Just to illustrate the difference, plot the following processes (you can do this in Excel):

    Y(t) = Y(t-2) + E(t)
    Y(t) = -Y(t-1) + Y(t-2) + Y(t-3) + E(t)

    where E(t) is white noise. These two models both contain a unit root, but neither is a random walk.
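
    If you prefer a script to Excel, a minimal Python simulation of the two processes looks like this; their lag polynomials factor as (1-L)(1+L) and (1-L)(1+L)^2 respectively, so each contains a unit root without being a random walk:

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        n = 500
        e = rng.standard_normal(n)

        y1 = np.zeros(n)  # Y(t) = Y(t-2) + E(t)
        y2 = np.zeros(n)  # Y(t) = -Y(t-1) + Y(t-2) + Y(t-3) + E(t)
        for t in range(3, n):
            y1[t] = y1[t - 2] + e[t]
            y2[t] = -y2[t - 1] + y2[t - 2] + y2[t - 3] + e[t]

        plt.plot(y1, label="Y(t) = Y(t-2) + E")
        plt.plot(y2, label="Y(t) = -Y(t-1) + Y(t-2) + Y(t-3) + E")
        plt.legend()
        plt.show()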

    @Bart

    In several comments you stated that a unit root would only have consequences for statistical methods and that it would not affect any theory on global warming, i.e. situation 1. Was this an impression you got from the debate here or did you formally derive this? And if the latter, could you show us how you did that?

    @Adrian Burd

    I agree with you on the part of looking at a problem from two sides. Personally I think the best way is to start with theory and use statistical analysis to test it. To start with the statistical testing and subsequently build a theory feels a bit like cheating to me, though this is definitely not a view shared by everyone within the field of statistics.

    Most trends I have seen so far in statistical analyses of AGW are of a type called deterministic trends. However, if the data contains a unit root, then it has a stochastic trend, which is of a different type than a deterministic trend. Now if the ‘underlying mechanism’ has a stochastic trend, but our model of that mechanism does not have a stochastic trend, we are essentially estimating a misspecified model. This will cause biased estimates, whether or not we would include a deterministic trend. Moreover, as I explained earlier, the presence of a unit root will make many standard statistical tests invalid.

    The way to solve this problem is via a method known as cointegration. It is a little bit technical to discuss exactly how it works, but most intermediate textbooks on econometrics will cover at least the basics of it. Cointegration takes into account that the two series have a unit root, so the analysis is done with a Vector Error Correction Model (VECM). What’s maybe more interesting is that it hypothesizes a relation between the two variables, which can be tested. This way it is still possible to test whether there is a correlation between two variables while both of them have a unit root. So if both temperature and CO2 have one or more unit roots, then cointegration is the way to test whether there is a relation between the two. This is exactly what VS has been advocating and, from what I read here, has been done by B&R, though I must admit that I haven’t had the time yet to read that paper, so I don’t know whether their analysis is correct.
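
    As a concrete toy illustration of the estimation side, statsmodels ships a VECM class; in the sketch below both the cointegration rank and the lag order are simply assumed, whereas in a real analysis they would come from rank and lag-selection tests:

        import numpy as np
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(3)
        n = 200
        trend = np.cumsum(rng.standard_normal(n))        # common I(1) trend
        data = np.column_stack([
            trend + 0.2 * rng.standard_normal(n),        # first I(1) series
            1.5 * trend + 0.2 * rng.standard_normal(n),  # second I(1) series, same trend
        ])

        res = VECM(data, k_ar_diff=1, coint_rank=1).fit()
        print(res.beta)  # estimated cointegrating vector (up to normalization)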

    Alex

  505. Alan Says:

    Allow me to be the first to ask a dumb question.

    If the temperature and CO2 data have one or more unit roots, and if temperature is a function of CO2 and insolation and sulphates and feedbacks etc, will a cointegration test between (just) the temperature data and CO2 data reliably reveal whether there is a relation between the two?
    [Reply: Statistics can say something about correlation. Physics can say something about a causal relation. BV]

  506. dhogaza Says:

    This is exactly what VS has been advocating and, as for what I read here, has been done by B&R, though I must admit that I haven’t had the time yet to read that paper, so I don’t know whether their analysis is correct.

    Well, you should. Though you can say their analysis is incorrect with certainty, from physics (with considering climate change at all), so the exercise should be … find out where they went wrong.

  507. dhogaza Says:

    without considering climate change at all …

  508. A C Osborn Says:

    Alex, HAS & VS, would the fact that the global temperature series is totally un-natural, i.e. massaged to death, make any difference to whether or not it has a unit root, or is I(0), I(1) or I(2) etc?
    Has anyone tried the same test on an unadulterated Temperature series form one thermometer?
    [Reply: “One thermometer” cannot measure global avg temp. Keep baseless accusations (adulterated; massaged to death) at the door before entering. Thanks! BV]

  509. A C Osborn Says:

    Should have said
    from one thermometer?

  510. Shub Niggurath Says:

    Mr BV
    You presume only climate scientists can ‘understand’ the climate, and that the rest of us unwashed and cloth-eared masses should just ‘learn’ and ‘listen’ and show humility?

    Any question against the AGW theory from within the community is shouted down. Any questions from outside the community are dismissively shooed away.

    Those who venture to discuss climate science are on a learning path – meaning they have learnt something. They are not all mindless ignoramuses. I would suggest you stop treating your audience as such. They are probably ahead of the curve of the AGW theory in its other facets.

    “Physics can say causation, statistics correlation” is an oversimplification, especially in the context of what has been discussed in this thread up to this point, and especially in the context of climate science.

    Regards

    [Reply: I don’t presume that “only climate scientists can ‘understand’ the climate”. But I do note that most who claim that AGW is all bunk do so from a logically and physically incoherent argument. If pointing that out makes me unpopular with those who love such claims, so be it. BV]

  511. mikep Says:

    For those not prepared to read 500 pages of Hamilton there is a nice informal introduction to co-integration (using random walks as an example) using the case of a drunk and her dog, here

    Click to access Murray93DrunkAndDog.pdf

  512. mikep Says:

    And there is a slightly less fun extension to the multivariable case here

    Click to access amstat.pdf

  513. docmartyn Says:

    Arthur Smith says: “Considering Earth’s average surface temperature as a reasonable metric (something more along the lines of total surface heat content is probably better, but average T is not a bad proxy for that) …

    But the analysis VS is promoting suggests something very different – that temperature is not constrained at all, but randomly walks up and down all on its own. That can only happen if the climate system is neither stable nor unstable (since we don’t have a Venus-like runaway either) but right on the cusp of stability, with positive feedbacks exactly cancelling negative feedbacks, at least on the time scale being discussed (decades to centuries?)”

    One could have a system with a stable total surface heat content and yet have a highly variable atmospheric temperature, pressure and water content/phase.
    Cycles of two or more decades of wet weather or drought are the norm, not odd events.
    VS is being very well behaved; unlike many responders.

    [Reply: No, that’s not what VS is claiming (anymore). Read his newer posts and also the quick rundown here. Whether behavior correlates with being right is an open question btw. I wouldn’t be surprised to see some randomness in that relation. BV]

  514. VS Says:

    Hi docmartyn,

    For the record, I’m not claiming that temperatures are a ‘random walk’. Watch out for that one, it’s a strawman! I’m claiming the instrumental temperature record contains a unit root, making regular OLS-based inference (including trend confidence interval calculations) invalid.

    I think this post by whbabcock here and the subsequent post by Alex just above are a good indication of my methodological take on the issue.

    I encourage you to read the whole thread though :)

    Hi mikep

    Do you happen to be the author of this book? :)

  515. docmartyn Says:

    VS, I am a neuro/biochemist and would never put words into someone else’s mouth. I just love the way equilibrium thermodynamics has been applied to a steady-state system. For instance, the black body temperature of the Earth is 5.5 °C, and as the Earth reflects about 28% of incoming sunlight, in the absence of the greenhouse effect the planet’s mean temperature would be about -18 °C.
    Hence, CO2 and water vapor must, in an equilibrium, produce about 33 ° C. However, at the top of Everest the temperature in the high summer climbing season is about -16 and in winters falls to about -37 ° C, yet the CO2 pressure is only about a third of that at sea level.

    http://www.mounteverest.net/story/ExWebseries-WinterclimbingTheBADchart,part2Dec172004.shtml
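    The two temperature figures quoted above are easy to check against the standard zero-dimensional Stefan-Boltzmann balance; a quick sanity check in Python (S = 1361 W/m2 is a standard value, and an albedo of 0.3 stands in for the quoted ~28%):

    # Zero-dimensional energy balance: T = (S * (1 - albedo) / (4 * sigma))**0.25
    sigma = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0            # solar constant, W/m^2

    for albedo in (0.0, 0.3):
        T = (S * (1 - albedo) / (4 * sigma)) ** 0.25
        print(f"albedo {albedo:.2f}: T = {T:.1f} K = {T - 273.15:.1f} C")
    # albedo 0.00 gives ~278 K (~5 C), albedo 0.30 gives ~255 K (~-18 C),
    # matching the blackbody and no-greenhouse figures in the comment.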

    VS have you ever done any steady state analysis?

    We have a good estimate of the amounts of CO2 humans generate by year, the Keeling CO2 data, the 14CO2 residency curves from the H-bomb tests (t1/2 = 12-15 years) and the pre-industrial steady state [CO2].
    It is rather trivial to work out the CO2 influx and outflux into the atmosphere.
    Sadly, people like rabbit only like box-equilibrium models, equilibrium thermodynamics and statistics that have one dimensional Gaussian variances.

  516. tgv Says:

    “I would suggest that it’s the height of arrogance to claim (without being hampered by evidence or understanding, apparently) that a whole scientific field has it radically wrong. Take it elsewhere. BV”

    Nowhere did I claim such a thing. My claim is that *you* have it radically wrong (actually, I think you were just being imprecise with your language :) ).

    My point is that the earth is never in equilibrium (or maybe better said, is only ever instantaneously in equilibrium). Rather it is always seeking equilibrium. There’s a stochastic component to ‘energy in’ due to variation in solar output, wobble, orbital asymmetry, albedo and a whole host of other factors (even small things like kinetic energy that is transferred from meteorites and space dust that is constantly hitting the earth). This, in turn, leads to a stochastic element of ‘energy out’ whose phase shift is also a stochastic function due to the complexities of ocean heat content and other things. Therefore, mean surface temperature (whatever that means) is measuring a complex interrelationship of stochastic processes. There is nothing ‘unphysical’ about mean surface temperature having a random component (while still being bounded). This is different from saying that temperature is a random walk.

    To say that ‘radiation in’ must equal ‘radiation out’ is overly simplistic because it ignores the dimension of time and the irregular nature of the associated temporal distortion.

    [Reply: Weather still happens indeed. BV]

  517. Al Tekhasski Says:

    It is quite audacious to argue that physics should prevent “global temperature” from walking around. Sometimes it is tricky to apply proper physics to complex systems far from equilibrium. I already responded that ‘radiation in’ equals ‘radiation out’ does not mean a steady global average of surface temps, but my remark apparently was not appreciated (or understood), and was ignored. Let me try again, with a simple example (for non-physicists and others).

    Let a planet have only two climate zones, 50% equatorial with flat temperature T1, and 50% polar, with T2. Then the following example combinations will give the same OLR of 240 W/m2:
    (A) T1=295K, T2=172.8K
    (B) T1=280K, T2=219.4K
    (C) T1=270K, T2=236.9K
    (D) T1=260K, T2=249.8K
    Yet the “global average temperature” will vary from 234K (case A) to 255K (case D), a swing of 21K. That’s a lot of potential for warming, all without ANY change in radiative balance. And a lot of room to walk chaotically, knowing that the atmosphere is a highly volatile turbulent system which, being quasi-2D, should theoretically exhibit a Kraichnan inverse cascade, with low-frequency large-area fluctuations expected, all in accord with physics.

    The above example is another illustration why the “global temperature” is unphysical, and therefore application of basic physics to this “index” may give a misleading impression.
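    The arithmetic is easy to verify in a few lines of Python (assuming, as the example does, that each zone radiates as a blackbody over half the planet):

    # Two-zone planet: a 50/50 split of blackbody zones at T1 and T2 gives
    # OLR = sigma * (T1**4 + T2**4) / 2.
    sigma = 5.670374e-8   # W m^-2 K^-4

    cases = {"A": (295.0, 172.8), "B": (280.0, 219.4),
             "C": (270.0, 236.9), "D": (260.0, 249.8)}
    for name, (t1, t2) in cases.items():
        olr = sigma * (t1**4 + t2**4) / 2
        print(f"{name}: OLR = {olr:.1f} W/m^2, mean T = {(t1 + t2) / 2:.1f} K")
    # All four cases sit at ~240 W/m^2 while the arithmetic-mean temperature
    # runs from ~234 K to ~255 K, as claimed.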

  518. Willis Eschenbach Says:

    First, my thanks to most everyone for a fascinating discussion. My conclusion is that VS (and his citations) have shown that temperature series are I(1) and CO2 is I(2). What that means is still unclear to me.
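    For readers wondering what I(1) and I(2) mean operationally: a series is I(d) if it must be differenced d times to become stationary. A toy illustration in Python (simulated series, not the actual data):

    # Integrate white noise once for an I(1) series and twice for an I(2)
    # series, then check stationarity of levels and differences with ADF.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(2)
    e = rng.normal(0, 1, 500)
    i1 = np.cumsum(e)        # integrated once:  I(1)
    i2 = np.cumsum(i1)       # integrated twice: I(2)

    for name, x in [("I(1) level", i1), ("I(1) 1st diff", np.diff(i1)),
                    ("I(2) level", i2), ("I(2) 1st diff", np.diff(i2)),
                    ("I(2) 2nd diff", np.diff(i2, 2))]:
        print(f"{name}: ADF p = {adfuller(x)[1]:.3f}")
    # Only the d-th difference of an I(d) series tests stationary (small p);
    # levels and under-differenced series do not.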

    Next, I object to the argument that ‘if X is true then much of modern physics is untrue’. For example:

    His endorsement of a statistical analysis by B&R that essentially says much of modern physics is wrong, is simply stupid.

    First, I find nothing in B&R that says “much of modern physics is wrong”. What they are saying is that you can’t use OLS etc. to relate CO2 and temperature. How does that negate modern physics? Second, there is a big difference between “modern physics” on the one hand, and the (possible mis-) application of some part of modern physics to a particular problem on the other hand. Overthrowing mis-applications of physics is quite common.

    Next, dhogaza Says:
    March 17, 2010 at 18:38

    Any paper claiming that a 1 w/m^2 forcing from different sources result in a different climate response won’t make it into any reasonable journal in the physical sciences.

    If it gets in anywhere, I imagine it will be some economics journal.

    Say what? Since different forcings have different frequencies, why would they not have a different response? Consider a 1W/m2 change in solar vs GHG forcing on the ocean. Solar penetrates the ocean to a depth of tens of metres. Longwave is absorbed in the first mm of the oceanic skin surface. Which will cause a greater rise in the skin temperature? Which will cause a greater rise in evaporation? How will those possibly have the same climate response?

    Or you might take a look at “Efficacy of Climate Forcings”, JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 110, by Hansen et al., which says:

    We find a substantial range in the ‘‘efficacy’’ of different forcings, where the efficacy is the global temperature response per unit forcing relative to the response to CO2 forcing.

    Economics journal? … Not.

    Your certainty that your opinion is right is misplaced, which makes your snide comments painful to read. You would do well to follow Oliver Cromwell’s advice: “I beseech you, in the bowels of Christ, think it possible you may be mistaken.”

  519. Scott A Mandia Says:

    Shub wrote:

    Internal combustion with crude oil/gas derivatives is among the most energy efficient modes of power production invented and improved upon. Fossil fuel consumption is the foundation of Western civilization, especially in the Northern hemisphere.

    Compare that with ‘green’ wind and solar power, for example. Abysmal output, requiring monstrous government subsidies derived from taxation of human productivity which is based on fossil-fuel burning, and most importantly – no input control whatsoever – that’s what these things are. Very energy efficient indeed! :)

    You are making the same mistake that many make by not factoring in the true cost of carbon. For example, the US spends about $80 billion per year for the Navy to monitor the Gulf region, about $80 billion per year in subsidies to fossil fuel companies (far greater than green subsidies, BTW), and then the cost of climate change as a result of this carbon is not factored in, etc., etc., etc. So it is always unfair for carbon with its many hidden costs to be compared to renewables which essentially have transparent costs.

    BTW, the geopolitical impacts of climate change are also typically ignored by many (but not by top US military experts) and these costs are frighteningly large.

    See my page that describes some of these implications:

    http://www2.sunysuffolk.edu/mandias/global_warming/talk_conservative_climate_change.html

    In the “business as usual” scenario where emissions of GHGs continue to rise unabated, the following consequences are realistic:

    China and India pass the US as economic superpowers
    Increased immigration
    Higher food costs
    Greater government subsidies (higher taxes)
    Higher insurance rates
    Increased authoritarian governments
    Increased terrorism
    Nuclear proliferation
    Regional and global wars between countries with nuclear weapons

  520. Scott A Mandia Says:

    Sorry for the typos. 3 hours of sleep last night and my 5 and 2 year old boys are tugging at my sleeve! :)

  521. Alan Says:

    Alex wrote:

    This way it is still possible to test whether there is a correlation between two variables, while both of them have a unit root. So if both temperature and CO2 would have one or more unit roots, then cointegration is the way to test whether there is a relation between the two. This is exactly what VS has been advocating and, as far as I read here, has been done by B&R

    I asked:

    If the temperature and CO2 data have one or more unit roots and if temperature is a function of CO2 and insolation and sulphates and feedbacks etc, will a cointegration test between (just) temperature data and CO2 data reliably reveal whether there is a relation between the two?

    I’d prefer a reply from Alex and VS, if you don’t mind, Bart.

  522. J. Bob Says:

    Just jumped over here from WUWT’s discussion on playing with stats. Sounds like it’s time to dust off the “How to Lie with Statistics” book.

    Just a note. It seems very little is mentioned about the real long term data sets such as Central England, De Bilt, Uppsala, Berlin. While they may not be up to today’s specs, it seems the accuracy even in the 50’s wasn’t that fantastic. Back then, I had a chance to earn 50 cents (good money back then) a week recording hi/lo temperatures from a neighbor who sent in the results to the government. This was on an old hi/lo Taylor thermometer with mechanical positionable arms that recorded the hi/lo, and would have to be manually reset. At best one could estimate to 1 deg.

  523. POUNCER Says:

    http://ideas.repec.org/p/anu/wpieep/9702.html

    Time series properties of global climate variables: detection and attribution of climate change

    Paper provided by Australian National University, Centre for Resource and Environmental Studies, Ecological Economics Program in its series Working Papers in Ecological Economics with number 9702.

    Date of creation: Mar 1997
    Handle: RePEc:anu:wpieep:9702

    The test results indicate that the radiative forcing due to changes in the atmospheric concentrations of CO2, CH4, CFCs, and N2O, emissions of SOX, CO2, CH4, and CFCs and solar irradiance contain a unit root while most tests indicate that temperature does not. The concentration of stratospheric sulfate aerosols emitted by volcanoes is stationary. The radiative forcing variables cannot be aggregated into a deterministic trend which might explain the changes in temperature. Taken at face value our statistical tests would indicate that climate change has taken place over the last 140 years but that this is not due to anthropogenic forcing. However, the noisiness of the temperature series makes it difficult for the univariate tests we use to detect the presence of a stochastic trend. We demonstrate that multivariate cointegration analysis can attribute the observed climate change directly to natural and anthropogenic forcing factors in a statistically significant manner between 1860 and 1994.

  524. Eli Rabett Says:

    VS’s argument that “the instrumental temperature record contains a unit root, making regular OLS-based inference (including trend confidence interval calculations) invalid.” fails, because the proposition he is arguing against is not based on an OLS-based inference about the global temperature series. His argument is rather an incoherent separation of the surface temperature record from everything it is connected to.

    As has been pointed out here, and here and here, the argument is that at all levels of modeling, from relatively simple one dimensional radiative models, to large three dimensional GCMs, increasing greenhouse gas concentrations has multiple observed effects. These include increased global surface temperature, decreased stratospheric temperature from 20-50 km, significantly increased Arctic temperatures and much more (if Eli left out your favorite, please feel free to add it). Moreover these predictions are validated by observations over the short (response to Pinatubo), medium (the satellite era), century (from 1850 or so, when we have instrumental records), millennial (proxy reconstructions) and eonical (ok, made that word up, but ice cores, isotope tracers, etc).

    Denialists keep trying to knock these observations down, but outside of blogs and newspapers, it is the denialists that keep getting knocked down, for example the latest nonsense about station location.

    As has been pointed out, the global surface temperature record does not exist in isolation, however VS’s current argument decouples it from everything else and in doing so contributes nothing.

  525. adriaan Says:

    @Bart,

    Dear Bart,

    You try to hide the things that VS has shown. And I think VS was more right than wrong, without knowing anything of your nice models. Let me explain. In the IPCC report, WG1, chapter 2, page 213, note a. This formula is modelling atmospheric CO2 concentration (right?). Can anyone explain the physical basis of this formula? Dhogaza?

    [Reply: Arrhenius? Tyndall? Or the Rabett. Bart]

    Dear Bart,
    Did you take the trouble to read the ref? What does Arrhenius, Tyndall or Rabett have to do with the explanation of the cited formula? Your answer is completely O/T. Joos et al 2001 (Glob Bio Cyc) would have been a better reply. And I reread Frank et al 2010 and looked at fig 1 and 2 and the supporting data. But maybe you prefer Patterson et al 2010 (PNAS) and their supporting data. The big advantage of the method developed by Patterson et al is that it allows one to get almost diurnal temperature readings by one of the most accurate proxies available. Their mass spectrometry based methods in combination with thin slicing of shells are brilliant. But for this discussion Li et al 2009 (Tellus B) would be more on topic. The ref to Mann 2009, which another commenter also brought up, is laughable.

  526. adriaan Says:

    @VS,

    I would like to have a meeting with you. I think we have a lot to discuss on a completely different, but closely related area. Suggestion on how to arrange this?

  527. adriaan Says:

    @VS

    The first beer is on me!

  528. Dave McK Says:

    I do believe that what was referred to as ‘statistically similar to a random walk’ in this context can be translated as ‘weather’.

    Is somebody saying there is no such a thing as weather?

    [Reply: Note that VS is not claiming (anymore) that the global avg temp over 130 years is a random walk. See also here. BV]

  529. Alex Heyworth Says:

    There is one thing that puzzles me about this thread and the corresponding couple at Tamino’s. That is that AGW defenders seem to think that the idea that air temperature over the period of the instrumental record could be hard to distinguish from a random walk is in some way a threat to their theory.

    Given (1) the vastly greater heat capacity of the oceans, (2) our extremely limited present understanding of ocean dynamics and its drivers, and (3) the apparent strong links between variations in ocean surface temperatures and average air temperature levels, it would surely be expected that there would be a large amount of apparently random variation in the average air temperature, even if AGW theory is correct.

    The true confirmation of AGW theory is going to come via measurements of ocean heat content. When we have fifty plus years of high quality OHC measurements, the truth or otherwise of current theory will be apparent. By then, I imagine we will also have a far better idea of why it is correct (or why not, if it is false).

  530. Alex Heyworth Says:

    PS, Bart, I note that in a reply to a comment above, you said

    [Reply: My take on that question is here and here. Basically, temps being a random walk is inconsistent with energy balance considerations (a.o. conservation of energy). BV]

    I would take issue with that, simply on the basis that significant variations in air temperature could take place because of heat transfer between oceans and atmosphere. If one were to take your statement as applying to the heat content of the whole earth (ie everything from the center of the core to the top of the atmosphere) then it would be true. However, average global air temperature could vary randomly within that context without violating any physics.

    [Reply: Did you read my newer posts? I explicitly mention that energy transfer from different parts of the climate system should otherwise have contributed, but that is excluded based on them also gaining energy, rather than losing it. BV]

  531. Frank Says:

    BV, VS et al,

    This is a fascinating discussion and well moderated! To summarize, those skeptical of AGW (myself included for purposes of disclosure) have taken to heart statistical analyses that the historical records for temperature and greenhouse gas forcings have unit roots and are of different orders, thereby precluding any AGW-supportive inferences of trend and/or correlation from these records. Conversely, those supportive of AGW dismiss these findings of stochastic trends and spurious correlation outright, since they “know” from the paleo-climate record (e.g. ice cores) that climate is “deterministic”, “bounded”, etc.

    OK then. Let’s consider a Nimitz-class aircraft carrier – something very deterministic and bounded in form and function. Further to the analogy, I’m going to provide a series of 50 lb. samples of the aircraft carrier to people who have no inkling of what an aircraft carrier is or does. How many samples will it take before the people to whom I provide these samples will be able to form an accurate assessment of what an aircraft carrier is and does? Quite a number no doubt. And for those scoffing at the analogy of sampling an aircraft carrier in 50 lb. increments, the surface thermometer record referenced at the top of the thread scales similarly to the paleo-climate record.

    So, AGW supporters can ignore the statistical findings because they’ve already seen the aircraft carrier, so to speak. But here’s the rub – in invoking the paleo-climate record, what becomes more relevant is how unusual the current 130-year temperature record is compared to centennial-scale changes in the former. (Answer: not very.) And how about carbon dioxide’s well documented, consistent lagging of temperature in the ice cores, or inconsistencies between carbon dioxide levels and ice-/hot-house conditions throughout the Phanerozoic?

    In short, AGW supporters can’t have it both ways. They can either accept that the current data doesn’t statistically support their case, or in invoking the paleo-record to prove that the statistics don’t matter, provide evidence that current climatic conditions are unusual in comparison to that record.

    [Reply: The “case of AGW” is not weakened by the presence of unit roots. How about you try to explain the large temp changes in the past (eg the Phanerozoic) without a substantial effect of CO2? BV]

  532. Alex Heyworth Says:

    PPS, further to my puzzlement two comments back, lest people suggest I should be equally puzzled as to why AGW doubters cast the “apparent randomness” of the temperature record as a refutation of AGW, my take on that is that it is a reflection of their lack of statistical and scientific knowledge. I expect better from AGW proponents, particularly those who are scientists.

    I’d also note that the emphasis on air temperature is understandable in the past, given that air temperatures were recorded for other purposes and were available to analyze. However, maybe it is time to think about moving on. For the reasons I’ve outlined above, average global air temperature is not a very good way of measuring what is happening to the climate system, even though air temperature is what we most immediately notice in terms of the environment’s impact on our comfort.

  533. dhogaza Says:

    There is one thing that puzzles me about this thread and the corresponding couple at Tamino’s. That is that AGW defenders seem to think that the idea that air temperature over the period of the instrumental record could be hard to distinguish from a random walk is in some way a threat to their theory.

    Trust me, no one does. The basic argument is whether or not analysis such as B&R’s (which VS endorses and says is correct) can overturn much of modern physics totally unrelated to AGW.

    Because these are the implications.

    B&R are doing nothing less than suggesting that much of modern (i.e. century old and younger) physics needs to be flushed down the toilet.

    VS claims he’s not supporting this, yet refuses to tell us where B&R goes wrong (my opinion of VS is that he has just enough understanding to run a bunch of scripts in R, make juicy quotes based on various papers, and to yell “tamino may be a PhD in statistics but he’s an idiot, as are all those who suggest we’re wrong!”). So I accuse VS of supporting B&R’s claims which refute so much of physics.

    Ignore “AGW defenders”, just concentrate on what B&R conclude about the physics of CO2 and LW IR absorption. Ask yourself why laboratory measurements don’t support this. Etc etc.

    Be a skeptic but at least be a smart one, OK?

    I’d also note that the emphasis on air temperature is understandable in the past, given that air temperatures were recorded for other purposes and were available to analyze. However, maybe it is time to think about moving on. For the reasons I’ve outlined above, average global air temperature is not a very good way of measuring what is happening to the climate system, even though air temperature is what we most immediately notice in terms of the environment’s impact on our comfort.

    You’re behind the times, but despite denialist hopes, sea temps seem to be rising, too.

  534. Rattus Norvegicus Says:

    Adriaan, I have to call BS on your cite. Note a in the online version is merely a list of the various groups doing modeling. It is a note to a table and singularly uninformative re: your question. Please provide a link to an online version of the report. Here is what I found in WGI, Chapter 2.

  535. David Stockwell Says:

    What an active thread! There seems to be a lot of concern about the implications of temperature testing I(1), but most of the GCM temperature outputs test I(1) too.

    So being I(1) doesn’t block a conventional origin for the behavior. What it does do is change the critical value for significance tests.

    For that matter, one wouldn’t really be sure if climate models also have the same general behavior shown by B&R unless one actually tested them, as B&R claim cointegration of delta rfCO2 and temperature is the emergent behavior of the system. If the models are any good, they will match the integrative behavior seen in nature.

  536. Alan Wilkinson Says:

    I am amazed, not at the debate since much of this statistical ground was covered in an earlier thread on B&R at WUWT, but at the strange defensive posture of climate scientists (or at least their advocates) when faced with an analytical tool they were previously unaware of.

    I would have expected excitement to discover what new insights this tool could bring but instead we see rage that existing beliefs might be challenged.

    To those, and particularly BV, who say that AGW theory cannot be wrong I say of course it is wrong. The only question is how much and in what ways. That is why it is still an active field of research instead of a dead one.

    Congratulations to all those who have contributed objectively so much to this discussion, in particular of course, VS, Alex and indirectly David Stockwell.

    [Reply: I have not claimed that “AGW cannot be wrong”. I have claimed that claims that it is radically wrong are entirely without base. BV]

  537. Marco Says:

    @Alan:
    Bart most certainly does not claim AGW theory cannot be wrong. And ADF has been used on various occasions, too, so it’s not like climate scientists are totally unaware of it.

    The issue is quite nicely summarised in dhogaza’s answer to Alex Heyworth, which I will make even shorter:
    B&R claim that AGW is mostly wrong based on their analysis, but make ‘predictions’ that do not make physical sense (same forcing, vastly different change in temp and permanent vs temporary) and that go against observations and analysis thereof. While it certainly is possible that AGW is wrong, the B&R analysis actually negates a much broader area of physics. I’d expect a bit more humility from scientists when their analysis contradicts loads of basic physics. It may just as well be the methodology that has a problem with the data.

  538. Alex Heyworth Says:

    The basic argument is whether or not analysis such as B&R’s (which VS endorses and says is correct) can overturn much of modern physics totally unrelated to AGW.

    Because these are the implications.

    B&R are doing nothing less than suggesting that much of modern (i.e. century old and younger) physics needs to be flushed down the toilet.

    These statements amount to nothing more than an admission that you can’t think of a way to interpret B&R’s findings that is compatible with physics. This is indicative of a lack of imagination. Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :) No doubt “real” climate scientists could do a lot better than me in suggesting mechanisms, if they were willing to put their minds to it.

    Ignore “AGW defenders”, just concentrate on what B&R conclude about the physics of CO2 and LW IR absorption.

    ie nothing?

    You’re behind the times, but despite denialist hopes, sea temps seem to be rising, too.

    If I’m behind the times in observing that obsessing about air temperature is not sensible, then how come so much effort is devoted to convincing the public that “x is the hottest ….whatever”? Just look at the press releases by NASA, the NOAA and Hadley. Are they even further behind the times?

    Sea temps seem to be rising, too? As confirmed by the AQUA data?

  539. JvdLaan Says:

    @Willis Eschenbach
    THE Willis Eschenbach? http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php – talk about painful.
    And in the meantime a lot of the WUWT-crowd is now coming in… quite sad, given the nice discussion that was taking place here.

  540. David Stockwell Says:

    The main issue in my mind is whether B&R are right or not. So far I have done two tests.

    1. Since they only used 3 GHG series for forcing, I thought that with more forcings there might be a different result. So I replicated their analysis with all the AGW forcings in the RadF file from GISS. The result was the same as B&R (http://landshape.org/enm/cointegration/).

    2. I wanted to test their result in a completely different way, without unit roots or anything. So I developed a linear model of temperature with natural sources of variation, CO2 and delta CO2. Whichever was more significant tests the result. It turned out that delta CO2 was more explanatory than CO2 – again consistent with their claim (http://landshape.org/enm/testing-beenstock/).

    It’s a bit like the saying that there are no proven theories, only those that haven’t been disproved yet. So far I haven’t seen any convincing disproof.
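    For the gist of test 2 without following the links, here is a toy Python version of the logic (synthetic data constructed so that delta CO2 drives the series; an illustration of what such a result looks like, not a replication of Stockwell’s analysis):

    # Regress a temperature-like series on both the CO2 level and its first
    # difference and compare which term carries the significance.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 150
    co2 = 280 + np.cumsum(rng.normal(0.5, 0.2, n))   # drifting, trending level
    dco2 = np.diff(co2, prepend=co2[0])              # first differences
    temp = 0.5 * dco2 + rng.normal(0, 0.2, n)        # built to respond to the change

    X = sm.add_constant(np.column_stack([co2, dco2]))
    fit = sm.OLS(temp, X).fit()
    print("p-values [const, CO2, dCO2]:", fit.pvalues.round(4))
    # On data built like this, dCO2 comes out significant while the CO2
    # level does not – the pattern B&R and Stockwell report.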

  541. HAS Says:

    My feeling increasingly is that a lot of the heat here is caused by the failure of science educators to teach some basics of the philosophy of science.

    Going back in time a bit
    BV in reply to Alan on March 20, 2010 at 13:18 said:

    “Statistics can say something about correlation. Physics can say something about a causal relation.”

    BV, this duality you are espousing will stop you being a great scientist. Physics is an empirical science. Statistics is about empiricism; it is your friend and is the tool whereby you can make the move from association to causality. But as you are finding out it is a hard task-master (as it should be, because causality isn’t cheap and easy).

    Then Eli Rabett Said on March 21, 2010 at 01:37

    “As has been pointed out here, and here and here, the argument is that at all levels of modeling, from relatively simple one dimensional radiative models, to large three dimensional GCMs, increasing greenhouse gas concentrations has multiple observed effects.”

    I’m sure this is a mis-speak, particularly in light of the rest of the post. Models produce predictions, the real world is observed. I wouldn’t normally bother to draw attention to this, but it does reflect a recurrent undercurrent of sloppy thinking that somehow says if a model has produced it, those results are real.

    Models are abstractions that are useful for their explanatory power. While they work they are great, when they don’t it’s time to get a better one.

    dhogaza on March 21, 2010 at 05:16 demonstrates this lack of understanding of the distinction between models and reality when he says:

    “B&R are doing nothing less than suggesting that much of modern (i.e. century old and younger) physics needs to be flushed down the toilet.”

    B&R are saying that the observations as reported by NASA GISS have characteristics that mean that the particular models implied by the various papers by Kaufman et al are wrong. Given that it takes a careful read to see that the conclusions only relate to these papers, and their claims of an impulse effect are somewhat more forcefully stated than might be appropriate (not to mention that this is a controversial area and it probably pays to be somewhat more circumspect), B&R are open to criticism.

    But not because the critic doesn’t understand that rejecting one particular complex model doesn’t mean the end of the world. It happens every day in real science as we stagger on in this grand endeavour to understand the physical world.

    Marco on March 21, 2010 at 09:23 you too should go back and read B&R more carefully and less defensively. The challenge to you is to understand the implications and incorporate them into your next iteration so you produce a more robust model of the climate.

    [Reply: Perhaps those calling themselves “skeptics” should also take the scientific methods into account, and apply their scepticism in all directions. This book chapter gives an excellent overview of the scientific methods, and how climate science stacks up against it. Slideshow is here (start at slide nr 30 to jump to the philosophy of science part). BV]

  542. David Stockwell Says:

    “their claims of an impulse effect are somewhat more forcefully stated than might be appropriate”

    I would agree with that, and also add that the results might only be saying that the earth/climate system absorbs more energy via impulses than via slowly changing forcings, which is a fairly normal property of a complex system if you think about it.

  543. Paul_K Says:

    Alex Heyworth: Since you raised the issue of our use of near surface temperatures, let me share my thoughts on this, and incidentally partially answer the question posed by Alex in two previous posts. Alex basically posed the challenge:
    Is it possible to put together a suite of governing equations which could be used to predict, and draw inferences about the statistical properties from, the temperature series? AND
    Can one say from theory whether the temperature series should have or should not have a unit root?
    I believe one can say with reasonable certainty that this challenge cannot be met with any confidence for the near surface temperature record, at least as it currently stands. The reason is that any such attempt comes up against a mathematical problem which falls into a class known as “knapsack” problems.
    We can assert that ANY such formulation of physics-derived governing equations must start with an attempt to estimate NET radiative transfer gain to or loss from the Earth’s system. The integration of the resulting power terms in theory allows one to say whether the system has gained or lost heat over a period of time and hence one can attempt to predict temperature at that time (with a myriad of different assumptions). I will ignore at this stage the highly non-trivial issue of estimating how the energy is partitioned within the system at any point in time, since my comments apply even to the simplest models of the Earth’s system.

    Now here is the insuperable problem: thermal emission from the Earth’s surface varies in accordance with T^4 (temperature to the fourth power). So how do we average the Earth’s temperature such that the application of a single temperature term (or a small number of distinct temperature elements if we partition the system into latitudes and sea vs terrestrial) works correctly to estimate the aggregate emission? We expect a valid temperature weighting within each partitioned element (valid in the sense of being consistent with its applicability in the emission term) to look something like the fourth root of an areal weighted average of T^4. However, the current surface temperature record is an areally-weighted average.
    There is a well-known mathematical inequality that relates the two methods of averaging, but, and this is the main point, there is no way to invert the existing areal average into the appropriate average for use in the emission equation. QED: it is not possible to derive the equations sought by Alex, because the difference terms in the two temperature series will be unpredictably different even if the two series derived from the different averaging methods are (as they must be) strongly correlated.
    This does not preclude the possibility of some energetic individual re-averaging all of the raw temperature series with process-dependent averaging and THEN looking for the statistical characteristics of this newly averaged temperature series, but the conclusion I present here is that one cannot draw inferences about the EXISTING surface temperature dataset(s) directly.
    Incidentally, for those who are fully following this argument, you will also note that in the time domain the difference terms in a “T^4 average” are non-trivially different from those found in an areal-average T. This should not affect the validity of applying appropriate statistical tests to the validity of GCM outputs against surface temperature data, (since the average temperatures in both datasets are computed in the same areal-weighted way), and that methodology does continue to show that the GCMs have little predictive skill. However, it puts a question mark in my mind about the validity of statistical inferences drawn offline as it were when people seek to test correlation of any radiative effect with an average surface temperature.
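    The averaging point is easy to demonstrate numerically; a toy illustration in Python (Gaussian temperature fields as a stand-in for a real surface field):

    # The area-weighted mean temperature and the "emission temperature"
    # (fourth root of the mean of T^4) differ, and the gap depends on the
    # spread of the field - so one cannot be inverted into the other.
    import numpy as np

    rng = np.random.default_rng(7)
    for spread in (5.0, 15.0, 30.0):
        T = rng.normal(288.0, spread, 100_000)   # toy temperature field, K
        mean_T = T.mean()
        emission_T = np.mean(T**4) ** 0.25
        print(f"spread {spread:4.1f} K: mean T = {mean_T:.2f} K, "
              f"emission T = {emission_T:.2f} K, gap = {emission_T - mean_T:.2f} K")
    # The gap grows with the spread: two fields with the same areal mean
    # can radiate quite differently, which is the inversion problem above.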

  544. Kweenie Says:

    “I wouldn’t normally bother to draw attention to this, but it does reflect a recurrent undercurrent of sloppy thinking that somehow says if a model has produced it, those results are real. ”

    Method Wrong + Answer Correct = Bad Science

  545. Alex Heyworth Says:

    David Stockwell

    The main issue in my mind is whether B&R are right or not.

    Even if they are not entirely right, their paper and the following discussions have raised important issues about the application of statistical methods to climate science.

    David Stockwell (later)

    “their claims of an impulse effect are somewhat more forcefully stated than might be appropriate”

    I would agree with that, and also add that the results might only be saying that the earth/climate system absorbs more energy via impulses than via slowly changing forcings, which is a fairly normal property of a complex system if you think about it.

    Indeed, B&R are a bit over the top in the phrasing of their conclusions. (it is only a draft paper, remember!) It will be interesting to see what they say if/when they finally publish.

    Paul K: interesting comments.

  546. Aprendiendo de las discusiones ajenas. « PlazaMoyua.org Says:

    […] https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-a… […]

  547. Frank Says:

    The “case of AGW” is not weakened by the presence of unit roots. How about you try to explain the large temp changes in the past (eg the Phanerozoic) without a substantial effect of CO2? – BV

    Correct! You can neither strengthen nor weaken a hypothesis that lacks evidence by further failing to provide evidence. Let’s review what we have here – surface thermometer records that GISS and CRU have poked and prodded (particularly with respect to UHI effects) in ways that demonstrably enhance the visual appearance of modern era warming. However, an impartial statistical analysis of these series says there’s nothing to see here.

    Re. explaining large temperature changes in the past without a substantial effect of CO2? Not my job, but please feel free to provide evidence that CO2 does explain large temperature changes in the past. The ice core data of the most recent 450 kyrs certainly does not do this, as CO2 lags temperature by about 800 yrs on average. And while I don’t pretend to know what causes persistent periods of widespread glaciations that have now been going on for about 3 myrs, I’m not aware of anyone who has plausibly suggested CO2 as a causal agent.

    As you are aware, proponents of AGW require that we make dramatic (and expensive!) changes in our lives. Good science says that they therefore need to provide us with dramatic evidence. So far, we have been given nothing.

    [Reply: You’re providing a total caricature of the science. Where do you get your information from? Certainly not from a random walk through scientific sources. Your claim re temperature series having been poked and prodded is false (see e.g. here about the effects of adjustments). Evidence for AGW, see eg here.

    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). This heat build-up is manifesting itself across the globe. Stratospheric cooling; nighttime temps having warmed more than daytime temps: signatures of greenhouse forcing.

    CO2 has been an important factor in pretty much all large temperature changes in the earth’s past, see eg this excellent presentation. That includes the ice age cycles, where CO2 was not the initial cause, but a strong amplifying feedback. Without a substantial effect of CO2 you could not explain the amplitude of temp change over the ice age cycles. If you can, you’ll be instantly famous.

    A claim that a whole field of science is radically wrong is an extraordinary claim, which needs extraordinary evidence. You have supplied none. BV]

  548. Paul_K Says:

    David Stockwell: I read your analysis concluding a higher correlative significance in first differences of CO2 forcing than in the absolute CO2 forcing itself. I would be grateful if you were to take a minute to read my rather long (sorry!) comment above and let me have any thoughts. I believe that your results do go some way to demonstrating why B&R reached their conclusions, but I am concerned that your analysis (and theirs) may be subject to the problem I was attempting to expose in how one averages surface temperatures.

  549. Shub Niggurath Says:

    Stockwell

    “There seems to be a lot of concern about the implications of temperature testing I(1), but most of the GCM temperature outputs test I(1) too.

    So being I(1) doesn’t block a conventional origin for the behavior. What it does do is change the critical value for significance tests.”

    This partially answers the question I raised earlier. Thanks

  550. tgv Says:

    “Weather still happens indeed. BV”

    I think we now agree precisely on where we disagree. The issue is all about timescales. That’s why VS’s analysis is interesting when done on a 150 year timescale. It suggests that the random components of the stochastic processes are still largely in play at a resolution of 150 years. (Again, this is different from saying that temperature is a random walk).

    It does not say that CO2 is having no contribution. But it does raise important questions for those who say that there is high certainty that CO2 is the predominant driver of net warming over a 150 year timescale. A more supportable statement would be that “CO2 is contributing to net warming (and therefore should be addressed through prudent public policy), but the degree of its contribution is still uncertain”.

  551. Eli Rabett Says:

    To continue beating my drum. You cannot draw meaningful conclusions from the statistical behavior of a single parameter in a coupled system.

    Even in 1988, Hansen’s argument was more sophisticated, viz: using physical constraints the outputs of the theory follow observation of a number of parameters including global temperature. Since we are confident of the theoretical inputs, and reasonably confident of the drivers, the forcings, and the observables over multiple time scales, the three legs of the theory support each other.

  552. Eli Rabett Says:

    HAS misunderstands Eli’s point about models, so let us add some emphasis
    ——————————
    ““the argument is that at ALL LEVELS of modeling, from relatively simple one dimensional radiative models, to large three dimensional GCMs, increasing greenhouse gas concentrations has MULTIPLE observed effects.”

    I’m sure this is a mis-speak, particularly in light of the rest of the post. Models produce predictions, the real world is observed. I wouldn’t normally bother to draw attention to this, but it does reflect a recurrent undercurrent of sloppy thinking that somehow says if a model has produced it, those results are real.
    ——————————

    Eli clearly separated the models from the observations; the observations are meaningless for prediction and understanding without the models, but one ALWAYS has to be careful that there is some wrong element in any ONE model that leads it to match the observation, especially if ONLY considering a SINGLE outcome. The FACT that ALL levels of modeling, over 100 years, AGREE on the basic outcomes RESULTING from INCREASES in greenhouse gases provides confidence in the models for understanding and prediction. Increasing model complexity principally increases the resolution of the models and brings out emerging properties of the system.

  553. Eli Rabett Says:

    Alex says:

    Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :)
    ————————————-
    eg: Here occurs a miracle. Very similar to a paper by Ferenc Miskolczi which was described by Nick Stokes as “The greenhouse gas theory that has been used for the last century is TOTALLY WRONG! The proof is left as an exercise for the reader.” Seriously, if you are making a claim like this, you need a good argument, put with some clarity. You would usually write down a model with some unknowns, state some physical principles with their resulting equations, and derive relations which characterise the unknowns.”

    You cannot just wave your hands.

  554. Eli Rabett Says:

    Adriaan, a useful place to start is David Archer’s piece on the multiple time scales for absorption of a pulse of CO2, and oh yes, RTFRs. You could also try to get an idea from the Joos report referenced in the footnote you cite.

  555. A C Osborn Says:

    Well nobody bothered to answer my question, so I will ask it again.
    We all know that the Global Temperature Anomaly series is “Corrected”, “Celled”, “Averaged” and “Homogenised”.

    Has anyone looked at a Raw Temperature Series to see if it exhibits the same Statistical characteristics?

    [Reply: One example here. BV]

  556. Marco Says:

    @tgv: to claim that CO2 is driving the temperature increase of the last 130 years (1880 and onward, not 1860) would be wrong, and something that is not claimed by climate scientists. In fact, the IPCC argues that the increase in the early 20th century may be, at least in part, explained by increase in solar input, but this explanation goes away when looking at the temperature increase after 1970. If anything, from 1970 onward there is a *decrease* in TSI.

  557. tgv Says:

    @Marco: “In fact, the IPCC argues that the increase in the early 20th century may be, at least in part, explained by increase in solar input, but this explanation goes away when looking at the temperature increase after 1970.”

    The IPCC is confusing weather with climate.

  558. Al Tekhasski Says:

    [BV replies : Perhaps those calling themselves “skeptics” should also take the scientific methods into account, and apply their scepticism in all directions. This book chapter gives an excellent overview of the scientific methods, and how climate science stacks up against it.]

    Perhaps before expressing such a definitive opinion on the Oreskes “overview”, one should become familiar with the details of the original material that she uses to make her case. In this chapter, after citing the work from the “climateprediction.net” modeling effort and the picture of ensemble trajectories (Figure 4.2), she writes:

    “What does an ensemble like this show? For one thing, no matter how many times you run the model, you almost always get the same qualitative result: the earth will warm.”

    She said that the “Figure prepared by Ben Sanderson with help from the project team”. What she is not aware of here is that in actuality about 43% of trajectories were EXCLUDED from that picture under one or another excuse, because the climate trajectory would show either “unphysical cooling”, “substantial drift in control phase” or another blowup of their model.

    Click to access nature_first_results.pdf

    As we see, the data were subjectively selected by the “project team”.
    Therefore, she was using (knowingly or not) severely incomplete and biased information, and therefore all her musings should be safely dismissed.

  559. Marco Says:

    @tgv: you are confusing fact and fiction.

  560. Willis Eschenbach Says:

    JvdLaan Says:

    March 21, 2010 at 10:09
    @Willis Eschenbach
    THE Willis Eschenbach? http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php – talking about painfull.
    And in the meantime a lot from the WUWT-crowd is now coming in…quite sad giving the nice discussion that was taken place here.

    I see, you don’t have the nerve to call me a liar yourself, so you’ll do it second-hand? What is this, “It must be true, I read it on the Intawebs”?

    I was not “caught lying”. I was accused of lying, by a man whose motives and honesty are suspect.

    However, I’m used to these ad-hominem attacks by now. You can’t think of anything to counter my science, so you call me a liar. Real classy … can we get back to the science?

  561. Willis Eschenbach Says:

    Eli Rabett Says:

    March 21, 2010 at 17:41

    Alex says:

    Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :)

    ————————————-
    eg: Here occurs a miracle. Very similar to a paper by Ferenc Miskolczi which was described by Nick Stokes as “The greenhouse gas theory that has been used for the last century is TOTALLY WRONG! The proof is left as an exercise for the reader.” Seriously, if you are making a claim like this, you need a good argument, put with some clarity. You would usually write down a model with some unknowns, state some physical principles with their resulting equations, and derive relations which characterise the unknowns.”

    You cannot just wave your hands.

    I propose just such a mechanism here. Basically as tropical temperatures increase, clouds and thunderstorms increase, driving the temperature below the starting point. This does exactly what Alex suggests above.

    Eli, just as you cannot just wave your hands and say there is an answer, you cannot wave your hands and say there is no answer …

  562. Eli Rabett Says:

    Yeah, and so did Lindzen, and people went looking for it and it was not there, although, of course, Lindzen still sees it in his dreams

  563. HAS Says:

    “HAS misunderstands Eli’s point about models”

    Yes I see now, “observed effects” was referring to what was observed in the output of the models, not that what the model produced were observations.

    It is still worthwhile making the point that if what is predicted from the model differs from reality, this is more of a problem for the modeller than for Nature.

    I note in passing that “fit to observations” when those observations have been used for model development is a very weak form of model verification. The model is just telling you what you already told it.

    I would also add that the ability of multiple models to produce the same outcome is obviously a necessary but not sufficient basis for gaining confidence in the models. In particular it is quite possible that all models incorporate a common erroneous assumption that is driving the results in question.

    End of platitudes, but they do seem to need to be said to take some of the heat out of the debate.

  564. Scott Mandia Says:

    I summarize the scientific consensus regarding “model accuracy” on the page below:

    http://www2.sunysuffolk.edu/mandias/global_warming/climate_models_accuracy.html

  565. Alan Wilkinson Says:

    Marcos: “While it certainly is possible that AGW is wrong, the B&R analysis actually negates a much broader area of physics.”

    As I see it, that is factually incorrect. None of climate science that consists of accurate collection and measurement of data is negated by B&R. None of physics that consists of individual interactions of matter and energy is negated by B&R. What may be negated are simplistic interpretations of the behaviour of complex systems incorrectly analysed. (Paul’s comments on the inappropriateness of averaged global temperatures being one such factor.)

  566. DirkH Says:

    Hi. I’m looking at this from a signal processing view.

    If GHG forcings are I(2) and temperature is I(1), then the GHG forcing level cannot directly cause (granger-cause, as I learned) the temperature, but the first derivative (or the first differences in a discrete time series) can.

    Some people now argue that that’s unphysical… I disagree; it’s an indicator that something is amiss in the hypothesized physical mechanism of the atmosphere, and that a refined physical explanation would have to be thought up that is in line with the statistical results.

    This physical mechanism would have to incorporate a negative feedback to match the statistical properties. I won’t offer a candidate explanation here, I’m not a physicist. I’m just saying that a negative feedback can solve the dilemma.

    I’m one of the WUWT crowd BTW so you know what to do…
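    Here is a synthetic Python example of that suggestion (an illustrative construction, not real data: an I(2) forcing whose first difference drives an I(1) temperature-like series with a one-step lag):

    # Build F as I(2) (its first difference dF is a random walk), let the
    # temperature respond to lagged dF, and test whether dF Granger-causes
    # temperature.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(3)
    n = 300
    dF = np.cumsum(rng.normal(0, 1, n))   # I(1) change in forcing
    F = np.cumsum(dF)                     # the level F is therefore I(2)
    temp = np.zeros(n)
    for t in range(1, n):
        temp[t] = 0.5 * dF[t - 1] + rng.normal(0, 0.5)   # driven by lagged dF

    res = grangercausalitytests(np.column_stack([temp, dF]), maxlag=2)
    # Small p-value: dF "Granger-causes" the temperature-like series.
    print("lag-1 F-test p-value:", res[1][0]["ssr_ftest"][1])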

  567. Alan Wilkinson Says:

    Apologies, for Marcos read Marco.

  568. HAS Says:

    Scott Mandia Says on March 21, 2010 at 21:44

    “I summarize the scientific consensus regarding ‘model accuracy’ on … ”

    I had previously read your summary, but would suggest doing the hard yards and reading the IPCC 2007 WGI report Chapter 8 et seq.

  569. Scott Mandia Says:

    HAS,

    Much of what appears on my page IS from Chapter 8 which I link to and reference. Perhaps I misunderstand your comment?

  570. HAS Says:

    Scott Mandia

    I just think for someone coming to this anew that some of the complexity and uncertainty gets lost in the summary.

  571. Alex Heyworth Says:

    # Eli Rabett Says:

    Alex says:

    Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :)
    ————————————-
    eg: Here occurs a miracle. Very similar to a paper by Ferenc Miskolczi which was described by Nick Stokes as “The greenhouse gas theory that has been used for the last century is TOTALLY WRONG! The proof is left as an exercise for the reader.” Seriously, if you are making a claim like this, you need a good argument, put with some clarity. You would usually write down a model with some unknowns, state some physical principles with their resulting equations, and derive relations which characterise the unknowns.”

    You cannot just wave your hands.

    Eli, my point is that when data and theory disagree there are only three possible solutions: better data, better analysis or better theory.

    Those who say “B&R must be wrong because their result doesn’t agree with the prevailing theory” are in effect refusing to look at any of these options. Who did you say was doing the hand waving?

    At least Tamino recognized this and attempted to critique their analysis methods.

  572. Scott A Mandia Says:

    HAS, that is fair enough!

  573. Willis Eschenbach Says:

    Eli Rabett Says:

    March 21, 2010 at 21:01
    Yeah, and so did Lindzen, and people went looking for it and it was not there, although, of course, Lindzen still sees it in his dreams

    I love how, whenever there is a dispute or disagreement about some piece of evidence y’all don’t like, AGW supporters declare it “debunked” or “disproved” or the like, and declare the game over.

    In fact, NASA says:

    Reconciling the Differences

    Currently, both Lindzen and Lin stand by their findings and there is ongoing debate between the two teams. At present, the Iris Hypothesis remains an intriguing hypothesis—neither proven nor disproven. The challenge facing scientists is to more closely examine the assumptions that both teams made about tropical clouds in conducting their research because therein lies the uncertainty.

    In other words, NASA says your claim is nonsense. They have a good three part article about it here.

    My point is that your bravado and certainty are misplaced. My rule of thumb?

    When the Wabbit says the discussion is over and the science is settled … he’s bunny-hopping as fast as he can away from something which is not settled at all.

  574. adriaan Says:

    # Eli Rabett Says:
    March 21, 2010 at 17:55

    Adriaan, a useful place to start is David Archer’s piece on the multiple time scales for absorption of a pulse of CO2, and oh yes, RTFRs. You could also try to get an idea from the Joos report referenced in the footnote you cite.

    As you could have seen, I have read Joos et al, and many more. But maybe you can help me translate this into meaningful physics? That is why I referred to the Li paper 2009. Give me the physical explanation for the model underlying the exponential decay postulated in the Bern CO2 cycle, and also why CO2 appears to have no effective half life. Why does 21% of a pulse of CO2 remain forever in the atmosphere? What physics is this? Arrhenius? In my biological models, this would be equivalent to BS. And physics is presumed to be a purer science than biology is, isn’t it?

  575. eduardo Says:

    VS said

    ‘To make a very long story short, seeing a very high temperature level in 2000, starting in 1850, is not at all ‘unlikely’ and inconsistent with a ‘random walk’. I hope you also see now why that GRL comment is nonsense.’

    With all respect, to keep this comment short, it seems that you did not read that GRL paper, and perhaps none of the commenters here did. That GRL paper was not about ‘random walk’ or ‘unit root’ whatsoever. We tested the H0 hypothesis that the global annual mean temperature could be represented by a non-deterministic fractional-differenced process – not a random walk – and thus stationary.

    Before writing the word ‘nonsense’, shouldn’t one read the original text first?

  576. Marty Says:

    I am coming in late in this discussion, but there seem to be some points worth making.

    First, unit root tests are famously low in power. Thus, the failure to reject the null hypothesis of a unit root is not quite as convincing as for the more powerful tests most applied researchers are familiar with. And co-integration fishes for relationships in the murky waters of unit root hypotheses. Let me quote here from quite a while ago:
    “It is shown analytically, using local to unity asymptotic approximations (Bobkoski (1983), Cavanagh (1985), Phillips (1987)), that whilst point estimates of cointegrating vectors remain consistent, commonly applied hypothesis tests no longer have the usual distribution when roots are near but not one. The size of the effects can be extremely large for even very small deviations from a unit root; indeed it will be shown that rejection rates can be close to one. Hypothesis tests on general restrictions on the cointegrating vector are only affected if the restriction includes coefficients on variables which do not have an exact unit root, and are unaffected asymptotically by the presence of near unit root variables not included in the restriction.” [Elliot, G., “On the Robustness of Cointegration Methods When Regressors Almost Have Unit Roots,” Econometrica, Vol 66, No. 1, p. 149]
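
    As a quick illustration of that low power, here is a minimal simulation sketch in Python (statsmodels; the AR coefficient and sample sizes are illustrative, not taken from any of the papers discussed):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)

        def rejection_rate(rho, n, trials=500, alpha=0.05):
            # Share of AR(1) samples (coefficient rho < 1, so no unit root)
            # for which ADF with an intercept rejects the unit-root null.
            hits = 0
            for _ in range(trials):
                e = rng.standard_normal(n)
                x = np.zeros(n)
                for t in range(1, n):
                    x[t] = rho * x[t - 1] + e[t]
                hits += adfuller(x, regression="c")[1] < alpha
            return hits / trials

        for n in (25, 100, 250):
            print(n, rejection_rate(0.95, n))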

    Second, the CO2 forcing series is convincingly I(1) over a range of tests and periods for trend-plus-intercept specifications; it is not so convincingly I(2) for an intercept (or intercept-plus-trend, which I don’t think applies for the differences) specification. The I(2) test should hold for an intercept specification but doesn’t, at least in the post-1970 period. So I just don’t think of the CO2 forcing as I(2). I am not convinced by the rejections or the failures.

    Third, temperature series are even more problematic. I have worked with over 100 daily temperature series at US stations, and I have never found a unit root. Of course, we could have a data generating process that doesn’t show the unit root at the daily level but does at the annual. Still, it would be nice to understand why. Also, the tests of unit roots for temperature levels are nowhere near as convincing as those for CO2 forcing levels. For instance, for the 1900-2008 period the Hadley, NOAA, and NASA series reject for an intercept-plus-trend specification, but the CRU doesn’t. All fail to reject for an intercept-only or no-exogenous-terms specification.

    Now suppose we look at the 1979-2008 period so as to also incorporate the satellite data. With the ADF we again have failures to reject for the no-intercept, no-trend specification for the surface data, but we have strong rejections for the satellite data. Interesting. So how about a constant term? Same thing, except the surface data get closer to rejection while the satellite data have weaker rejections. More interesting. Now, with the intercept-plus-trend specification of the ADF, all series reject the unit root hypothesis. Aha.

    But let us not stop here. The ADF is, as has been pointed out, not the only test. Suppose we use the ERS (Elliott-Rothenberg-Stock) test with the same intercept-plus-trend specification. With the ERS test there is a failure to reject for all series (two surface series reject at the 10% level). It seems that aggregate temperature series can’t make up their minds as to what they really are, at least statistically.

    So does an aggregate, annual (or longer period) temperature variable have a unit root? Maybe.
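
    The mechanics of the specification checks described above, in sketch form (a synthetic drifting random walk; “n” is the no-deterministics spelling in recent statsmodels):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        x = np.cumsum(0.01 + 0.05 * rng.standard_normal(120))  # random walk with drift

        for spec in ("n", "c", "ct"):  # none / intercept / intercept+trend
            print(spec, "ADF p-value:", round(adfuller(x, regression=spec)[1], 3))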

    Fourth, and here is a point for those readers who are inclined to dismiss Messrs. Beenstock and Reingewertz: unit roots are a knife-edge criterion that is not well suited for testing against close alternative hypotheses, AND series close to integrated status may work sufficiently well for a co-integration specification. For those of you who are a little foggy on co-integration, the following non-technical paper (“A Drunk and Her Dog”) offers a nice description; a toy version is sketched after the link:

    Click to access Murray93DrunkAndDog.pdf
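
    A hedged toy version of Murray’s drunk and her dog: two series that each wander like random walks but never drift far apart, which is exactly what the Engle-Granger cointegration test picks up (all parameters invented):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller, coint

        rng = np.random.default_rng(2)
        path = np.cumsum(rng.standard_normal(500))     # the drunk's random walk home
        drunk = path + 0.5 * rng.standard_normal(500)
        dog = path + 0.5 * rng.standard_normal(500)    # wanders, but never far from her

        print("drunk alone, ADF p:", adfuller(drunk)[1])       # unit root not rejected
        print("Engle-Granger coint p:", coint(drunk, dog)[1])  # no-cointegration null rejected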

    Random walks have infinite variances; highly autocorrelated processes have very large ones. Large variances make statistical significance, in the broad sense of the term, difficult. One can dismiss Beenstock and Reingewertz because they are wading into an area of statistical uncertainty, but any of us who are making inferences about temperature trends should realize we are all wading in the same waters. Macroeconomists have been wading in those waters since the early 1970s, when Box and Jenkins (statisticians, not economists) told the profession to wake up: the time series properties are as important as the structure. It was a message that was much resisted, but guys like Granger and Engle and a whole bunch of others gave enough guidance and the profession (well, at least the econometricians) changed for the better. Maybe (and I truly don’t know) Messrs. Beenstock and Reingewertz and others of that time series inclination are simply doing for climatology what Box and Jenkins did for macroeconomics.

  577. adriaan Says:

    @Eduardo,

    Ref to the GRL paper please, GRL has been mentioned so often.

    [Reply: How unusual is the recent series of warm years? BV]

  578. eduardo Says:

    VS wrote
    ‘There is more. Take a look at Beenstock and Reingewertz (2009). They apply proper econometric techniques (as opposed to e.g. Kaufmann, who performs mathematically/statistically incorrect analyses) for the analysis of such series together with greenhouse forcings, solar irradiance and the like (i.e. the GHG forcings are I(2) and temperatures are I(1) so they cannot be cointegrated, as this makes them asymptotically independent. They, therefore have to be related via more general methods such as polynomial cointegration).’

    I would like to pose this question for my understanding. Assume we have a discrete process C which is found to be I(2), and is the result of discrete sampling of a continuous function c(tau). Now consider t(tau), the time derivative of c, and its discrete sampling T. In other words, T would be the time-difference of C. Would T be I(1)? I would say so.
    You would conclude, according to the above paragraph, that C and T are independent, and yet they are physically bound to each other, being just the samplings of c and t.

    Am I wrong? Thank you
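
    The thought experiment above is easy to mimic numerically; a minimal sketch on synthetic data:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(3)
        C = np.cumsum(np.cumsum(rng.standard_normal(500)))  # discrete I(2) process
        T = np.diff(C)                                      # its time-difference, I(1)

        for name, x in (("C", C), ("T = dC", T), ("ddC", np.diff(T))):
            print(name, "ADF p-value:", round(adfuller(x, regression="c")[1], 3))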

  579. eduardo Says:

    @ adriaan

    http://www.agu.org/pubs/crossref/2008/2008GL036228.shtml

  580. Shub Niggurath Says:

    Scott:

    From above

    “…about $80 billion per year in subsidies to fossil fuel companies (far greater than green subsidies, BTW), and then the cost of climate change as a result of this carbon is not factored in, etc.”

    If the only rationale for mass deployment of new modes of energy production is to avert climate change, then the framework for our analysis is:

    1) The newer modes (solar, wind) should be as energy ‘efficient’ as the previous one
    2) Being so, they should not contribute to CO2

    Having agreed to this (I am sure you will), if one examines the various parameters that contribute to an energy-efficient source, it is clear that the alternative sources fail on several of those parameters, especially energy density and control of output rates.

    The so-called monstrous error therefore lies not in the subsidies themselves, but in the fact that one has to subsidize these inefficient modes of production.

    Subsidies for fossil fuels are meaningless in the context of our present discussion because they are subsumed in the cost of achieving present-day human productivity, and are therefore our baseline.

    Back to the unit-root. :)

    Regards

  581. Willis Eschenbach Says:

    eduardo Says:
    March 22, 2010 at 00:51
    @ adriaan

    http://www.agu.org/pubs/crossref/2008/2008GL036228.shtml

    Eduardo, would I be correct in saying that the conclusion from your paper is that the temperature series is not stationary?

    And given that (as far as we know) global temperatures have been rising since the Little Ice Age, would I be correct in saying that this is not a surprising finding?

    Thanks,

    w.

    [Reply: The LIA ended when solar activity picked up again. It has long since stabilized (ie no trend in solar indexes since the 1950’s). The warming since then has nothing to do with the LIA having ended almost 100 years earlier. See also here. BV]

  582. adriaan Says:

    @Eduardo,

    Thanks for the link, I downloaded the paper and the additional info. I will be back on it.

  583. adriaan Says:

    @Eduardo,

    It looks as if the conclusion of your paper is only valid for averaged data. On a single-station basis, your study shows that the observed warming does not lie outside the natural variation. Does this mean that warming is dependent upon averaging? I have my doubts about the use of averaging and gridding of temperature data. As for the memory effect, have a look at the Bern CO2 model (which you are familiar with). It implicitly implies a memory effect, without which the buildup of anthropogenic CO2 would not be possible.

    [Reply: It shouldn’t come as any surprise that the variability in a single location is much larger than that in the global average. The consequence is that statistical significance is not reached as quickly for single locations (or short timescales). BV]

  584. Alan Wilkinson Says:

    Marty, for my clarification there was considerable earlier discussion about the necessary length of data for useful application of unit root tests. VS pointed out the inadequacy of short runs such as you describe (eg since 1979).

    What makes you think TSA of such data subsets has any power?

  585. eduardo Says:

    @ Willis,

    Dear Willis,

    the conclusion is that the recent clustering of record annual temperatures is very unlikely in a long-term-persistence process (fractional differencing process), and therefore points to either a non-stationary or deterministic trend in the period analyzed.

    This statistical analysis cannot discriminate between causes, as only temperature data were analyzed, not forcing data. You say, however, that temperatures have been increasing since the LIA. That’s true; the question is to identify and quantify the driver. It is not sufficient, I think, to just say ‘they have been increasing’.

    I think that, to judge a theory (CO2 or sun), one should require the same level of accuracy and/or explanatory power.

  586. eduardo Says:

    @ adriaan,

    The outcome of a test depends on the signal-to-noise ratio present in the data. In averaged data much of the local random variation is filtered out, so it may well be that the level of significance is lower for local than for averaged data, although the signal strength may be the same in both.

    Memory. A test cannot demonstrate a hypothesis, it can only reject one. In this case, that the clustering of record years could be due just to a long-term persistence model.
    So temperature may have a memory effect, quite likely.
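
    The signal-to-noise point can be sketched in a few lines (the trend size and noise levels are invented):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        years = np.arange(50)
        signal = 0.017 * years  # common trend, deg/yr

        def detects_trend(series):
            return stats.linregress(years, series).pvalue < 0.05

        single = np.mean([detects_trend(signal + rng.standard_normal(50))
                          for _ in range(200)])
        avg100 = np.mean([detects_trend(signal + rng.standard_normal((100, 50)).mean(axis=0))
                          for _ in range(200)])
        print("single station:", single, " 100-station average:", avg100)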

  587. Willis Eschenbach Says:

    eduardo Says:

    March 22, 2010 at 01:57
    @ Willis,

    Dear Willis,

    the conclusion is that the recent clustering of record annual temperatures is very unlikely in a long-term-persistence process (fractional differencing process), and therefore points to either a non-stationary or deterministic trend in the period analyzed.

    This statistical analysis cannot discriminate between causes, as only temperature data were analyzed, not forcing data. You say, however, that temperatures have been increasing since the LIA. That’s true; the question is to identify and quantify the driver. It is not sufficient, I think, to just say ‘they have been increasing’.

    I think that, to judge a theory (CO2 or sun), one should require the same level of accuracy and/or explanatory power.

    Thanks, eduardo. I agree that there is a trend in the data (non-stationary or deterministic, as you point out).

    However, since the temperatures have been increasing since the LIA, it seems unlikely that CO2 is the driver … which does not, of course mean that the sun is the driver either.

    It simply means that we don’t know what the driver is.

    [Reply: non sequitur. BV]

  588. VS Says:

    Hi eduardo,

    Do you, by any chance, happen to be Eduardo Zorita, one of the authors?

    I stated up there that I haven’t read the paper carefully; my apologies if I missed something crucial (in my defense, at that point I didn’t think this debate would evolve this far).

    However, I did notice that a stationary process is assumed (i.e. no unit root).

    What’s the basis for concluding that, in light of these test results and these auxiliary test results, as well as all the literature pointing to the presence of a unit root?

    I’m interested in your motivation. Now, I also took a closer look at your paper. I cite from page 2:

    “For the process to be stationary d must lie between 0 and 0.5.”

    Then you write:

    “The Whittle method gives values slightly larger than 0.5, even disregarding the period from 1960 onwards, for all three global records.”

    Your calculations imply nonstationarity, and are completely in line with my results posted above. But then you guys go on, and do this:

    “Considering these possible uncertainties, it will be assumed here that d is smaller, but very close, to 0.5”

    And simply assume an I(0) process with high persistence. I hope you can understand my initial reaction, although I do apologize for the tone (it was a ‘different’ debate up there :)

    So, allow me to summarize:

    – The Whittle method (i.e. your calculations) implies non-stationarity
    – The literature widely reports the presence of a unit root (i.e. non-stationarity)
    – Most test results (see links above) imply the presence of a unit root (i.e. non-stationarity)

    Why do you then assume stationarity (i.e. I(0) with high persistence) in your analysis? I’m very interested in your motivation in light of the above.

    Finally, wouldn’t you agree (analytically) that the presence of a unit root in the GISS series in fact invalidates the conclusions of Zorita, Stocker and von Storch (2008)?

    ***

    As for your second post. I have to point out that I was being sloppy there and made a ‘typo’ (just noticed). Polynomial cointegration, see also Engle and Yoo (1991), is a method for relating I(1) and I(2) series. The difference of order of integration implies that the series cannot be integrated linearly. An attempt to cointegrate them polynomially is then tested and rejected by BR.

    So, perhaps I misinterpreted your question (I think we are using different ‘jargon’ :), but what I am trying to say here is that the different orders of cointegration don’t make series completely independent (simply linearly independent). What has been tested by BR, is that ‘general’ dependence (i.e. via polynomial cointegration).

    I’m very interested in your input, especially since your findings are so widely cited (and used as an argument in a lot of on and offline ‘discussions’ :).

    PS. In light of your answer to adriaan, I do have to point out that the KPSS test in fact rejects stationarity. See linked test results.

    ————–

    Hi Marty,

    Again, see the links in the answer to eduardo for test results. On the basis of these test results, with this series (not the completely differently structured daily series), we conclude that the series contains a unit root.

    Ergo, this series is non-stationary and a unit-root based approach is justified.

    You mention the low power of the ADF test (i.e. it has a low probability of rejecting a false H0, i.e. the chance of a type II error is high). However, we also tested the unit root hypothesis via the KPSS test (see test results), which takes stationarity (i.e. the absence of a unit root) as the H0. Here we reject stationarity (at 5% and 10% sig).
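
    The complementary ADF/KPSS pairing described above, sketched on a synthetic random walk (note that statsmodels’ kpss warns when the p-value runs off its lookup table):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller, kpss

        rng = np.random.default_rng(5)
        x = np.cumsum(rng.standard_normal(150))  # synthetic unit-root series

        print("ADF p (H0: unit root):  ", adfuller(x, regression="c")[1])          # high: don't reject
        print("KPSS p (H0: stationary):", kpss(x, regression="c", nlags="auto")[1])  # low: reject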

    What’s your opinion on that?

    PS. Note that daily/monthly data need to be properly corrected for seasonality (using TSA methods). This greatly complicates the analysis.

    ————–

    Hi Willis Eschenbach,

    At WUWT you asked me about the Hurst coefficient. I snooped around quickly (by no means a conclusive query) and my first inference is that the calculations of the Hurst coefficient in fact assume stationarity.

    Can somebody please correct me if I’m wrong here (good chance of that being the case)?
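
    For what it’s worth, a bare-bones rescaled-range (R/S) estimator is sketched below; the demeaning and rescaling inside each window is where a stationarity assumption sneaks in, which is the point at issue (a sketch, not a careful implementation):

        import numpy as np

        def hurst_rs(x, min_chunk=8):
            # Classic R/S estimate: slope of log(R/S) against log(window size).
            x = np.asarray(x, dtype=float)
            n = len(x)
            sizes, rs_means = [], []
            size = min_chunk
            while size <= n // 2:
                rs = []
                for start in range(0, n - size + 1, size):
                    seg = x[start:start + size]
                    dev = np.cumsum(seg - seg.mean())   # demeaning assumes a stable mean
                    s = seg.std(ddof=1)
                    if s > 0:
                        rs.append((dev.max() - dev.min()) / s)
                sizes.append(size)
                rs_means.append(np.mean(rs))
                size *= 2
            return np.polyfit(np.log(sizes), np.log(rs_means), 1)[0]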

    ————–

    NB1. Thanks everybody for the input, I like where this is going. I still owe some people a reply, especially Paul_K (in that particular case, I actually have a couple of questions. Very interesting post! :). I’ll try to get to them soon.

    NB2. David Stockwell, welcome! Could you please send an (empty or not :) email to vs dot metrics at googlemail dot com? Thanks!

  589. VS Says:

    typos:

    “The difference of order of integration implies that the series cannot be integrated linearly.”

    cointegrated

    “but what I am trying to say here is that the different orders of cointegration don’t make series completely independent”

    integration (without co :)

    doh, it’s late

  590. John Whitman Says:

    ”””’HAS Says: March 21, 2010 at 11:18 – But not because the critic doesn’t understand that rejecting one particular complex model doesn’t mean the end of the world. It happens every day in real science as we stagger on in this grand endeavour to understand the physical world.””””

    HAS,

    Well put, ‘this grand endeavor to understand the physical world’. Thanks BART for supplying a good venue at your blog. Keep at it.

    In the Western civilization ‘this grand endeavor to understand the physical world’ started its focus in the ancient Greek era and (with fits and starts) has continued to this very blog thread.

    VS, thanks for introducing the Augmented Dickey-Fuller tests and unit roots into the ‘grand endeavor’ on this blog threat. You put us on a steep understanding curve.

    John

  591. John Whitman Says:

    Apologies, spell check error in my ‘John Whitman Says: March 22, 2010 at 04:25 ‘ comment.

    The word ‘thread’ meant, not ‘threat’.

    Strange mistake, I agree. Sorry.

    John

  592. HAS Says:

    Re BV’s reply to my comment at March 21, 2010 at 11:18

    “Perhaps those calling themselves “skeptics” should also take the scientific methods into account, and apply their scepticism in all directions. This book chapter (http://www.lpl.arizona.edu/resources/globalwarming/documents/oreskes-chapter-4.pdf ) gives an excellent overview of the scientific methods, and how climate science stacks up against it. Slideshow is here (http://www.lpl.arizona.edu/resources/globalwarming/documents/oreskes-on-science-consenus.pdf start at slide nr 30 to jump to the philosophy of science part).”

    BV what you have linked to is advocacy for the “scientific consensus on climate change” not for the scientific method. These are different things, and (how do I put it gently) the point of my comment was that to have a productive debate we need to focus on the latter rather than the former (for or against from either side).

  593. Tim Curtin Says:

    A C Osborn said on Global average temperature increase GISS HadCRU and NCDC compared:

    “Has anyone looked at a Raw Temperature Series to see if it exhibits the same Statistical characteristics?”

    and again on March 20, 2010 at 15:59:

    “Has anyone tried the [unit root tests] on an unadulterated Temperature series from one thermometer?”

    [Reply: “One thermometer” cannot measure global avg temp. Keep baseless accusations (adulterated; massaged to death) at the door before entering. Thanks! BV]
    But AC Osborn raises an interesting issue. Why could not the IPCC offer in its AR5 the climate statistics from each one of all (c. 1200) stations in the current GISS and HadCRUT sets that have at least 50 years of unbroken records to date, classified by their respective max & min temperature and rainfall etc. for each of the last 50 years, with the SSR and [CO2] at each? Let us do the trending, averaging, unit rooting, and gridding.

  594. Alex Heyworth Says:

    Any time I see someone mentioning the scientific method from now on, I am going to point them in the direction of an excellent book I have recently read: Henry Bauer’s “Scientific Literacy and the Myth of the Scientific Method”.

    A good quote from an Amazon review:

    A key point stressed by Prof. Bauer in different contexts is that the power of science is that it is agreed on by consensus, but that does not always mean that the consensus is right, again because humans are fallible, and because data is *always* interpreted according to a theory or some other bias. The author, as have many other philosophers of science, refutes the common belief that in science knowledge is gained exclusively by strict Baconian impartial induction. Examples are cited where scientists could not accept data obtained wholly by scientific methods because it didn’t fit their prejudices.

    The chapter called “The So-Called Scientific Method” is the best I’ve read on why the empirical scientific method, while a wonderful ideal to strive for, is nevertheless a myth. Prof. Bauer makes many important points, such as that some sciences (physics) are theory-driven, while other sciences are observation-driven (geology); some sciences can make precise theories through specific experiments (physics and chemistry), while other sciences (cosmology and paleoanthropology) cannot run experiments and are thus very “data deficient.”…

    Another chapter that is also outstanding is the following chapter, “How Science Really Works.” Prof. Bauer uses as the main theme the excellent analogy devised by Michael Polanyi of scientific problem solving as a puzzle of different teams communicating with each other, getting at the truth, piece by piece, separately but in tandem nevertheless. Another theme that is very helpful in this chapter is the author’s cogent distinction between textbook science and frontier science. Textbook science is almost always reliable because it has passed the test of time through repeated verification. On the other hand, frontier science, which is unfortunately what is usually reported in the news precisely because it is “new” and exciting, often turns out to be dead wrong. The chapter also discusses those levels of science between these two “extremes.” After reading this chapter I feel that I now have a much clearer way to assess the truth of whatever science I might be reading about.

    An excellent read, I’d go so far as to say that if all followers of climate blogs read it, it could reduce the level of misunderstandings and falsehoods by perhaps three quarters. For a start, it would get rid of most of the rubbish about science not being about consensus and the Popperian falsification stuff.

    [Reply: Sounds interesting indeed. I wrote about the relevance of consensus here. BV]

  595. ScP Says:

    Tim, A C Osborn, wander over to the Chiefio, he loves that kind of thing.

    http://chiefio.wordpress.com/

  596. HAS Says:

    Alex Heyworth, Bauer’s a truly interesting character!

    http://thetruthbarrier.com/essays/46-john-strausbaugh/168-science-and-scientism

  597. Anonymous Says:

    @ VS

    Dear VS,

    yes, I am one of the authors. I am really not aware that the GRL paper has been or is being discussed. Let me summarize the paper by first saying what it doesn’t say. It doesn’t say whether the global mean temperature is an integrated process or not. This aspect is interesting in itself, also from the physical point of view, but the objective of the paper was different. I also think that the design of the tests on unit roots deviates from the original discussion (see last paragraph).

    I see the point of discussion as follows: some people claim that CO2 is causing warming, as seen in the temperature trends; other people claim that the trends are of stochastic nature, caused by either a unit-root process or by a fractional-difference process. What should be tested, in my opinion, is: (1) can the global mean temperature be a unit-root or fractional-differencing process *in the absence of anthropogenic forcing*? and (2) assuming the global temperature shows stochastic trends in the absence of anthropogenic forcing, what is the likelihood of observing the 20th-century temperature trends (or the clustering of record years, for that matter)? In other words, if natural variations can be described by a unit root or fractional differencing, could these natural variations give rise to the observed trends?
    The GRL paper simply explored part of the second question: if the natural variations of the global mean temperature are a fractional-difference process, what is the likelihood of observing the recent clustering of record years? This probability turns out to be very small.

    To the question of whether the observed global mean temperature is a unit-root process: it may be; some tests indicate it is, some tests indicate it isn’t. If I turn the heating in my room continuously higher and higher, the temperature will be a unit-root process. Does this demonstrate that the heating has no influence on temperature? Obviously not. No, actually it doesn’t demonstrate anything interesting, and it bears no relevance for the testing of observed temperature trends. It would be interesting *if* the temperature contained a unit root *in the absence of heating*. This reasoning also applies to the observed global annual temperature: a unit-root process is relevant for the significance of trends *if* the global temperature is a unit-root process under the null hypothesis, namely that CO2 is not driving the temperature. So to be informative, all the tests for a unit root should be conducted either for periods where the anthropogenic influence was not present, or in control simulations with climate models.
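
    The heated-room analogy can be mimicked in a few lines: a stationary AR(1) pushed by a steadily rising forcing tends to pass a unit-root test unless the deterministic trend is included in the test equation (all numbers invented):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(6)
        n = 150
        x = np.zeros(n)
        for t in range(1, n):
            # AR coefficient 0.5 (no unit root) plus a ramping "heating" term
            x[t] = 0.5 * x[t - 1] + 0.01 * t + rng.standard_normal()

        print("intercept only:  ", adfuller(x, regression="c")[1])   # often fails to reject
        print("intercept+trend: ", adfuller(x, regression="ct")[1])  # typically rejects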

  598. VS Says:

    Hi Eduardo,

    Thank you for your reply! I do have a couple of comments / questions.

    —————-

    First you write:

    “Let me summarize the paper by first saying what it doesn’t say. It doesn’t say whether the global mean temperature is an integrated process or not.”

    I disagree here. You assume temperatures not to be integrated of the first order (but rather fractionally integrated). I think that also amounts to ‘saying’ it.

    —————-

    Furthermore, that assumption is contradicted by the test results (and more test results) for the series in question. Also, your own calculations indicate that the series is non-stationary (which is in line with those test results). I cite, again, from your paper:

    “For the process to be stationary d must lie between 0 and 0.5.” and “The Whittle method gives values slightly larger than 0.5, even disregarding the period from 1960 onwards, for all three global records.”

    In other words, what you have stated in your reply is your opinion on the structure of the temperature series, which is clearly contradicted by the observations (i.e. by analytical facts: both testing and your own calculations).

    Isn’t formal testing exactly that which turns opinion into science?

    Now, from your post I infer that you have the same opinion on unit roots in temperature series as Kaufmann et al (2006). May I remind you, however, that while Kaufmann indeed holds the same idea on the ‘essential’, stationary, nature of temperature series, he does respect the test results, and in all of his analyses he treats the global average temperature series as an I(1) process.

    —————-

    “So to be informative, all the tests for a unit root should be conducted either for periods where the anthropogenic influence was not present, or in control simulations with climate models”

    Would you mind sharing the formal method (e.g. derivation, simulation) used to arrive at this conclusion?

    I find it very awkward also that you start by assuming that anthropogenic forcings are warming the planet, and that therefore we cannot test the global mean temperature series for unit roots. At the same time, the first step we need to take in order to (empirically) assess whether anthropogenic forcings are indeed warming the planet is to test the global mean temperature series for unit roots.

    ??

    —————-

    Finally, you wrote:

    “The GRL paper simply explored part of the second question: if the natural variations of the global mean temperature are a fractional-difference process, what is the likelihood of observing the recent clustering of record years? This probability turns out to be very small.” (bold added)

    This implicitly answers the question I posed here, namely:

    “Finally, wouldn’t you agree (analytically) that the presence of a unit root in the GISS series in fact invalidates the conclusions of Zorita, Stocker and von Storch (2008)?”

    So, assuming the series contains a unit root, the conclusions on the ‘probability’ of modern warming arrived at by Zorita, Stocker and von Storch (2008) are invalidated.

    Would you be so kind to also confirm this explicitly?

  599. Anonymous Says:

    Dear VS,

    I think you are not understanding correctly the logic of the GRL paper. We were *not* assuming that the global annual temperature was not a unit-root process. We were exploring the consequences of the *natural* variations of the global mean temperature being a fractional-difference process. The fact that the observed record may show a unit root does not invalidate the paper. The conclusions of the paper would be invalidated if the *natural* variations of the temperature (i.e. without anthropogenic contribution) were shown to be able to cause a unit-root process.

    VS said
    ‘I find it very awkward also that you start by assuming that anthropogenic forcings are warming the planet, and that therefore we cannot test the global mean temperature series for unit roots. At the same time, the first step we need to take in order to (empirically) assess whether anthropogenic forcings are indeed warming the planet is to test the global mean temperature series for unit roots.’

    I do not agree with your logic. The logic is not to assume that CO2 is warming the planet. The underlying logic here, and also in the basic statements of the IPCC, is to test the hypothesis that ‘*natural* variations are warming the planet’. Then you assume some models for the structure of those natural variations: white noise, red noise, fractional differencing and unit root, or natural variations based on control simulations with climate models, etc. Then you try to rule out these null hypotheses one by one. The GRL paper was focused on two of them, red noise and fractional differencing.
    When one of these hypotheses cannot be ruled out, the story is not finished. One needs to explain what physical mechanism can cause it. If you find that a unit-root process can describe the observed temperature trend, ok, perfect. Now you need to suggest what *natural* mechanisms can cause a unit-root process. To say the trend occurs because temperature is a unit-root process does not say anything by itself. You could as well say that it is caused because Jupiter wishes it to happen.

    This is the eternal scientific logic. A hypothesis or theory can never be proven, it can only be falsified. The CO2 influence on temperature will never be proven in the logical sense. We can only falsify all other hypotheses known to us. I think this is nothing new.

  600. Alex Heyworth Says:

    HAS, thanks for the link to the Henry Bauer interview. I knew he had a broad background, but did not realize that he was quite so eclectic. What a character indeed! His book that I recommended gives little clue to any of this.

  601. MartinM Says:

    You assume temperatures not to be integrated of the first order (but rather fractionally integrated). I think that also amounts to ’saying’ it.

    Seriously? You don’t see the difference between assuming a particular model for the purposes of testing it, and asserting that all other models are incorrect?

    So, assuming the series contains a unit root, the conclusions on the ‘probability’ of modern warming arrived at by Zorita, Stocker and von Storch (2008) are invalidated.

    Since the probability arrived at by Zorita et al is explicitly conditioned on a particular class of model, of course it’s not invalidated by the existence of other models.

  602. VS Says:

    Hi Eduardo

    (1) the probability of seeing the ‘record years’ is conditional on the stationarity (and fractional integration ‘order’, also assumed) of the GISS series.

    (2) test results, and your own calculations, and the literature, point to non-stationarity

    (3) ergo, your probability equations are incorrect

    VS

  603. MartinM Says:

    No. P(A|B) doesn’t change just because B happens to be false. This is a remarkably simple premise, and if you don’t understand it, you really have no place discussing anything at all involving statistics or the scientific method. Zorita et al tested two particular models and found them wanting. Pointing out that those models are incorrect quite obviously doesn’t contradict their results; that’s precisely what they found.

  604. VS Says:

    Hi MartinM,

    I’m fine with the academic exploration of the probability: P(Y=y_observed|Y=stationary)

    I’m simply pointing out that our observations tell us that Y is in fact not stationary.

  605. MartinM Says:

    Yes, but that has absolutely nothing to do with Zorita et al. Are you now withdrawing your claim that their conclusions are invalid?

  606. VS Says:

    Ah, I see: “(3) ergo, your probability equations are incorrect”

    Oops, that’s a different discussion :)

    Make ‘incorrect’: ‘of no empirical relevance’.

    Sorry, my bad.

  607. Marty Says:

    Alan Wilkinson, Re: length of series. Yes, the sample size makes a difference, a big difference. From memory, let me offer a quick example: for an alternative hypothesis rho of something like 0.95, the power of the ADF (or maybe the Elliott-Rothenberg-Stock test) at 5% significance was around 10% for N=25 and over 80% for big N, probably over 250. So yes, I don’t think that a sample of 29 has much power (although I am a semi-believer in the nature of “highly insignificant” test results, at least where they exist, remembering that power is defined in conjunction with the size of the test). I was just offering the relative rejections or not over different periods, data sets, and specifications (of something that should be roughly the same in all) as an example of the problematic nature of dealing with unit roots. They are like the stock market: you know you should use it because it offers the best (long-term) returns, but that doesn’t mean you are always confident in what you get.

    VS, first a thank you for putting the unit-root and co-integrated-series fat, so to speak, into the climate fire, even though I suspect I am much more agnostic about the significance of any particular co-integration result (by “significance” I do not mean statistical but generic, as in importance, such as passing the interocular test: when the result is really important it hits you between the eyes, which, of course and alas, does not always happen). This particular “fat” was needed and we will just have to see what comes.

    Regarding the KPSS test, I don’t know enough about it (haven’t read the article or anything more than a manual) to argue its relative merits. I do worry about the spectral estimation and the bandwidth, because these make a difference (look at the Hadley data with an Andrews bandwidth), but figuring out what is going on there is out of my league these days (I’m an old guy). I also don’t know how non-stationary but non-unit-root conditions, say heteroskedasticity, factor into the critical region. But just for the heck of it I tried the test against an AR1 process, covariance stationary, with a randomly growing exogenous driver (but a simple trend), and the test rejected almost every time. Presumably I violated the maintained hypothesis, but that’s the problem: so does the real world, be it in economics or climate. (I am not dismissing the KPSS test, only noting that it may not be perfectly applicable to the issue here.)
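
    One possible reading of that experiment, as a sketch (the exact setup isn’t given above, so the growing driver here is invented):

        import numpy as np, warnings
        from statsmodels.tsa.stattools import kpss

        rng = np.random.default_rng(7)
        rejections, trials = 0, 200
        for _ in range(trials):
            driver = np.cumsum(0.02 * np.abs(rng.standard_normal(150)))  # growing exogenous input
            x = np.zeros(150)
            for t in range(1, 150):
                x[t] = 0.7 * x[t - 1] + driver[t] + 0.5 * rng.standard_normal()
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")  # KPSS warns on off-table p-values
                rejections += kpss(x, regression="c", nlags="auto")[1] < 0.05
        print("KPSS rejection rate:", rejections / trials)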

    Regarding your more general belief in the unit roots of the temperature series, I am not saying you are wrong. What I am trying to do is tone down your certainty, and let me note that you seem to me to be doing some of that yourself. Your first comments here (and I confess I have not followed the thread all the way) conveyed the tone of certainty of, say, a Paul Samuelson in 1968 when he said that we have the fiscal and monetary tools to fight unemployment and inflation simultaneously. And of course, then came the 70s. I am not trying, like some unfavorable referee a nasty editor pulled out to axe your article, to reject your results or views. Rather I am saying that they are interesting (in the academic sense) even when there is a degree of skepticism about the maintained hypothesis. In the words of Edward Leamer (taken by Peter Kennedy), I suspect a little “sinning in the basement”, but like both I am not terribly bothered by it. I just don’t want preaching from the econometric temple to be confused with the actual practice.

    We who believe in statistics should be careful. Since most of our statistics is based on measurable spaces, we may get preached to by the physicists like Georg Cantor was by Leopold Kronecker: “Die ganze Zahl schuf der liebe Gott, alles Übrige ist Menschenwerk.”

    God created the natural numbers, all else is the work of man.

    And as a physicist once told me, there are fewer than 10^100 elementary particles, so maybe old Leopold was right and there goes multivariate statistics. :-)

  608. A C Osborn Says:

    A C Osborn Says:
    March 21, 2010 at 17:59

    BV, your reply has absolutely nothing to do with the first or second version of my question. I am not trying to have one temperature series represent the globe.

    My point is that the Global Temperature series is not NATURAL.
    It is Adjusted, Gridded, Averaged by Grid and then averaged overall.
    By which time it bears no resemblance to a Natural Raw Temperature.

    So my question is, Does a Natural raw Temperature Series have the same Statistical characteristics as the Global temperature series?

  609. Josh Says:

    VS, you inspired me to do a cartoon of you

    http://www.cartoonsbyjosh.com

    I am probably wildly off the mark and I hope I don’t offend anyone [except Tamino ;-) ]

    Do drop me an email – I would very much like to get in touch.

  610. Kweenie Says:

    “Does a Global Temperature Exist?”

    Click to access GlobTemp.JNET.pdf

  611. Marco Says:

    @Kweenie:
    Ask McKitrick how he handled missing values for the two averaging methods. Then ask him why he used two different methods for handling missing values. Don’t be surprised if he repeats what he told Tim Lambert: “oh, that makes it four different methods”!

  612. MartinM Says:

    “Does a Global Temperature Exist?”

    How the hell did that get published?

  613. AndreasW Says:

    I totally agree with Bishop Hill:

    Statistics wasn’t supposed to be this much fun!

    VS

    Really interesting debate you started.

    The IPCC’s logical tactic has been:

    We can’t prove CO2 is warming the planet, but we can’t prove any other factor is warming the planet either, so it must be CO2 at a likelihood of 90%.
    Anonymous says he can’t prove a hypothesis, only disprove it. So he chooses to disprove some hypotheses about “natural warming” and finds them not true. Fine. But the flipside to that coin is that you can test the hypothesis about CO2, which I think is the point made by VS. If the test says the hypothesis about CO2 is false, it’s game over. You don’t need to understand why the planet is warming in order to disprove CO2.

    [Reply: AGW is not falsified by the presence of a unit root. BV]

  614. michel Says:

    “How about you try to explain the large temp changes in the past (eg the Phanerozoic) without a substantial effect of CO2?”
    Logical fallacy again. If CO2 is not causal, you cannot explain it with CO2. So you are begging the question you set out to prove.

    What you’re really saying is that we have a rise in CO2, and we have a rise in temp, and we have some explanation about how, despite the one following the other, the relation was causal. But this is not an argument for CO2 having the effect you need. If you use it like that, you’re going in a circle. What you need is some independent evidence, other than the previous warming you are trying to account for, that CO2 really can do that. And for that, the time lags are a real problem.

    [Reply: I guess chicken don’t get out of eggs in your world? BV]

  615. MartinM Says:

    But the flipside to that coin is that you can test the hypothesis about CO2, which I think is the point made by VS. If the test says the hypothesis about CO2 is false, it’s game over.

    Not only do the tests VS mentions not falsify AGW, they couldn’t possibly do so. They’re just not suited to the task.

  616. MartinM Says:

    Logical fallacy again. If CO2 is not causal, you cannot explain it with CO2. So you are begging the question you set out to prove.

    …what? You appear to be suggesting that assuming a particular model to see if its output matches observation is logically fallacious. Well, there goes science, then. Fantastic.

  617. claw Says:

    Holy smokes! I finally finished the thread (for now). Very interesting reading and I never thought I’d say that about statistics. Great questions asked and answered. I think whbabcock has a very good summary of the thread even if it is in the middle.

    Sure would be nice to see this kind of (usually) civil debate at any (both pro and con) other blogs. Thanks for hosting, Bart

  618. GDY Says:

    Bart, VS and others – thank you for the constructive, educational dialogue. As a relative newcomer (and somewhat well-educated layperson) to the topic, I am trying to listen to the arguments of substance from both the ‘pro’ and ‘sceptic’ AGW camps. I applaud those of you who have attempted to further our understanding of the world we live in.

    VS – when would the Stat Tests show a trend in a meaningful ‘simulation’ of a future non-stationary climate process with some ‘trend’ added in? Is that possible to do? So generate some autoregressive non stationary sequence, then adjust all numbers up by some amount (0.1 degree, 0.2 degree, etc). How long would it be before the tests rejected no trend for any particular trend introduced (time/magnitude relationship?)?
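
    Something along these lines can be sketched as follows (drift and noise sizes are invented; testing the mean of the first differences sidesteps the invalid trend t-test on a unit-root series):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        def detection_rate(drift, n, sigma=0.1, trials=1000, alpha=0.05):
            # Random walk with deterministic drift; test drift != 0 on the differences.
            hits = 0
            for _ in range(trials):
                dx = drift + sigma * rng.standard_normal(n)
                hits += stats.ttest_1samp(dx, 0.0).pvalue < alpha
            return hits / trials

        for n in (10, 30, 50, 100):  # years of data
            print(n, detection_rate(drift=0.017, n=n))  # roughly 0.17 deg/decade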

    Bart – do you have any reservations about immediate public policy responses, given the limited number of data points we have for the proposed/strongly suspected GHG forcing trend (including dramatic restructuring of our means of production and all the potential unintended consequences that could unleash)? And further, can we safely conclude we understand the nature of the relationship between CO2 and temperature? Isn’t it possibly non-linear? Or, given what VS is saying with regard to the actual temperature record, possibly still no relationship?
    I do also find the idea of the “global energy budget” intellectually reasonable, as well as the specific evidence of more energy coming in than going out. Do we have reliable historical data on deep ocean temps, or is this something for which we have good data only post-ARGO network? Have we solved the global heat anomaly (ie, the Trenberth/Pielke Sr vein…)?

    Thanks again, I apologize if I should have already known these things from other sources!

    [Reply: There is a lot we know (esp the general picture) and there’s a lot we don’t know exactly (esp specific details) about climate change. But we’ll have to decide on a course of action, whether that’s BAU or some emission reduction path. Given what we know, I think it’s prudent to reduce our emissions. Based on what the science says, the risks are real, and the time lags both in the energy system and the climate system add to that risk (just as stopping smoking when you’re being driven into the ICU is a bit on the late side; the effects are cumulative). See also this post. BV]

  619. Kweenie Says:

    Marco and MartinM’s reactions are more predictable than climate. Actually I was wondering what the “other team” has to say about this paper.

  620. Marco Says:

    @Kweenie:
    Seriously, ask McKitrick what he has done. Then make up your own mind.

  621. AndreasW Says:

    MartinM

    I thought the tests showed the correlation between CO2 and temperature is spurious. You say the tests “couldn’t possibly do so”. How do you mean? Is VS using the wrong tests, or is CO2 the “holy variable” that could never be tested against spurious correlation, or has the proper test not been invented yet?

  622. David S Says:

    I’m shocked at some of the logic here:
    “The underlying logic here, and also in the basic statements of the IPCC, is to test the hypothesis that ‘*natural* variations are warming the planet’”
    and then, if we cannot find a “natural” explanation, to assume that we have ruled out all possible alternatives and that therefore it must be CO2. This is exactly the same reasoning that the ancients used to “prove” the existence of God and miracles. If we cannot find a natural explanation, it must be…..

  623. eduardo Says:

    @ David,

    well, you may be shocked, but this is the scientific method.
    Actually, I usually try to see the merits in all arguments from the so-called skeptics, but this time I am really disappointed. It seems to me, and I hope I am wrong, that the logical basis of modern science is not understood by some here.

    I wrote that any particular hypothesis or theory (including CO2 as driver of the present warming) can *never* be logically proven to be right. Theories or hypotheses can only be disproved when their predictions contradict the observations. So the way science works is by disproving competing hypotheses until one, or none(!), remains. A theory is challenged continuously, and the CO2 theory (or the solar theory, or any other) should also be continuously challenged. The example presented by VS about CO2 being I(2) and temperature being I(1) could be a logical starting point (I do not dispute that), but the technical details are not clear, at least not accepted by all, as we can see in this discussion thread.

    What I see as a tautology is to say ‘temperature is an integrated process and therefore the observed trend is normal’. By the same token I could say: temperature trends are caused by Jupiter, so the observed trend is ‘normal’. In other words, to be an integrated process is no explanation from first principles at all; it is just a phenomenological, perhaps interesting, description, but nothing more.

  624. eduardo Says:

    One interesting thing that VS could do is to apply the I(2)-I(1) tests to the temperatures simulated by the IPCC models for the 20th century, and see if the results of those tests are really different than for the observed temperature. I would be happy to provide the global temperature means from the models, around 20 of them.
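
    In that spirit, a naive sequential-ADF classifier of the order of integration might look like this (the deterministic terms and thresholds are simplistic; a sketch only):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        def integration_order(x, alpha=0.05, max_d=3):
            # Difference until ADF rejects a unit root; return the number of differences.
            for d in range(max_d):
                if adfuller(x, regression="c")[1] < alpha:
                    return d
                x = np.diff(x)
            return max_d

        rng = np.random.default_rng(9)
        print(integration_order(np.cumsum(np.cumsum(rng.standard_normal(300)))))  # expect 2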

  625. David Stockwell Says:

    eduardo: “temperature is an integrated process and therefore the observed trend is ‘normal’” – I agree with you about the stats, but this is not the argument, at least of B&R. I see it as a system identification exercise, where the I(n) status hints at a system where temperature is related to the change in CO2 more than to the level of CO2. IMHO nothing more, and the physics proceeds from that point.

    Further to testing the B&R idea, I ran some independent regressions here: http://landshape.org/enm/testing-beenstock/. One of the results was:

    TEMP ~ -0.49(***) + 0.06*OO(***) + 0.72*GHG() - 11.1*dGHG(***) + 4.0*V() - 0.09*SS(), R-squared: 0.8709

    (***) means highly significant and () not significant

    In this case the delta GHG (or change in GHG) was a much better explanation of temperature than the level of GHG. In other words, unless I screwed up somewhere, it is independently confirming B&R without recourse to the statistics of unit roots.
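
    The level-versus-change comparison can be sketched generically as below (synthetic data; the names GHG and dGHG simply mirror the output above, and the coefficients are invented):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n = 130
        ghg = np.cumsum(np.cumsum(0.01 * rng.standard_normal(n)))     # smooth I(2)-like driver
        temp = 5.0 * np.diff(ghg) + 0.1 * rng.standard_normal(n - 1)  # responds to the change

        X = sm.add_constant(np.column_stack([ghg[1:], np.diff(ghg)]))  # level and change
        res = sm.OLS(temp, X).fit()
        print(res.params, res.pvalues)  # the change term should dominate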

  626. Alan Wilkinson Says:

    I am not sure why we should be so surprised if temperature is I(1). Assuming surface temperature is a proxy for energy, and that the net energy flow into the globe’s surface for a year is relatively random (moderated by chance configurations of clouds and convection patterns, for example), then that is exactly what we would expect.

    Now BV will say any increased surface temperature should lead to increased outgoing emission, increasing the likelihood of cooling, but that can be negated or at least reduced by various feedbacks, including increased water vapour and CO2 emitted from oceans, melting ice, etc. So curiously this turns the AGW argument somewhat on its head.

  627. Shub Niggurath Says:

    Eduardo:
    “I wrote that any particular hypothesis or theory – including CO2 as driver of the present warming- can *never* be logically proven to be right. ”

    The same statement, if made by a global warming skeptic, will never be accepted.

    “Theories or hypothesis can only be disproved when its predictions contradict the observations.”

    A theory/hypothesis is a complex sum of its many parts. If contradictions are observed in its parts, it may mean that the theory may not be a good fit on the whole.

    “So the way science work is by disproving competing hypothesis…”

    So you are saying essentially that, in order to disprove a theory, one should come up with competing hypotheses which should then be proven? Is that correct?

    Can science work by disproving existing hypotheses?

    Regards

  628. Willis Eschenbach Says:

    David S said:
    March 22, 2010 at 22:35

    I’m shocked at some of the logic here:
    “The underlying logic here, and also in the basic statements of the IPCC, is to test the hypothesis that ‘*natural* variations are warming the planet’”
    and then, if we cannot find a “natural” explanation, to assume that we have ruled out all possible alternatives and therefore it must be CO2.

    eduardo replied:

    March 23, 2010 at 00:08

    @ David,

    well, you may be shocked, but this is the scientific method.
    Actually, I usually try to see the merits in all arguments from the so-called skeptics, but this time I am really disappointed. It seems to me, and I hope I am wrong, that the logical basis of modern science is not understood by some here.

    I wrote that any particular hypothesis or theory (including CO2 as driver of the present warming) can *never* be logically proven to be right. Theories or hypotheses can only be disproved when their predictions contradict the observations. So the way science works is by disproving competing hypotheses until one, or none(!), remains.

    Eduardo, perhaps you misunderstand David. What we see happening is that “scientists” are saying:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    This is absolutely the antithesis of the scientific method, and I’m shocked that you see it otherwise. You are speaking in support of the Fallacy of the Excluded Middle. Do you truly think that there are only two possibilities?????

    The scientific method is to say:

    We can’t explain it with our current models, therefore either:

    a) our models are not as complete as we think, or

    b) it’s CO2, or

    c) it is natural variation from an unknown forcing (cosmic rays, sulfur compounds from plankton, changes in the combination heliomagnetic/geomagnetic field, whatever), or

    d) the earth has a thermostat as required by the Constructal Law, or

    e) it’s some unknown factor, or

    f) some combination of the above.

    Concluding that the cause of temperature rise is CO2 simply because we can’t explain it is not scientific in the slightest. Neither is believing in CO2 because it cannot be falsified. As David said, it’s like the ancients saying

    We can’t understand lightning, so it must be Thor’s hammer striking fire, and you’ve never been able to falsify my Thor theory so we should believe it.

    Or as Shakespeare said:

    There are more causes of temperature changes in heaven and earth, Eduardo,

    Than are dreamt of in your philosophy.

    There is another, more subtle difficulty with your “last thesis standing”, which is that the CO2 hypothesis makes no testable predictions, so it cannot be falsified. What would it take for you, Eduardo, to give up your claim that CO2 inexorably must cause rising temperatures? Fifteen years without rising temperatures? We’ve already seen that …

    I look forward to your answer.

  629. GDY Says:

    Silly question – is it even appropriate to use surface temperatures as a proxy for global temperature? You know, given that the oceans are 70% of the earth’s surface and contain 1.37 billion km^3 of water and all. Shouldn’t we develop a global temperature index INCLUDING OCEAN TEMPS, then run the unit root tests, and ONLY THEN perform the appropriate statistical tests for significance of the relationship between CO2 and global temperature? (If we have such an index, I haven’t come across it in the increasingly large amount of time spent on this subject.) I think Bart may have been implying this way, way above in his ‘this is interesting in an academic way but says nothing about global warming’ comment. It makes sense to me as a layperson that the complicated relationship between ocean temperature and the atmospheric climate ‘data generating process’ could potentially obscure any signal from a GHG forcing, presuming there is one.
    Thanks again everybody!

    [Reply: Surface temps over the oceans are included in most global avg temp timeseries. But much more heat is stored in the ocean waters than in the air, and indeed, it’s best to look at the whole picture of global change (also including changes in the cryosphere (ice) and ecosystems, in hypothesizing what it’s caused by. BV]

  630. Marco Says:

    @GDY:
    You may be shocked to learn that there IS a global temperature index including ocean temperatures…

    Perhaps you may want to read this:
    http://data.giss.nasa.gov/gistemp/
    (look for LOTI).

  631. Scott Mandia Says:

    Willis said:

    Fifteen years without rising temperatures? We’ve already seen that …

    Ugh! Why do people persist in this fallacy?

    First off, it is NOT true. 20 of the warmest years on record have occurred in the past 25 years. The warmest year globally was 2005 with the years 2009, 2007, 2006, 2003, 2002, and 1998 all tied for 2nd within statistical certainty. (Hansen et al., 2010) The warmest decade has been the 2000s, and each of the past three decades has been warmer than the decade before and each set records at their end. 2010 is likely to establish a new record.

    Secondly, CO2 forcing is weak, so it takes much time to rear its head above the shorter-term variability. Why not look at temps since 1979, when satellites were included?

    Notice anything?

  632. Anonymous Says:

    @ Willis

    Willis wrote
    ‘Eduardo, perhaps you misunderstand David. What we see happening is that “scientists” are saying:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    This is absolutely the antithesis of the scientific method, and I’m shocked that you see it otherwise. You are speaking in support of the Fallacy of the Excluded Middle. Do you truly think that there are only two possibilities????? ‘

    Dear Willis,

    Sorry if I misunderstood David. But I think you misunderstood me this time. In my previous postings I wrote that no theory can be proven right, including AGW. So the assertion ‘it *must* be CO2’ cannot stem from me, and if it did it would contradict what I wrote. I do not think I said that, and I do not think that the IPCC said that either. The IPCC writes in terms of likelihood. If ‘scientists’ wrote that sentence, or even ‘the science is settled’, that is their problem. Every scientist knows that this can never be achieved.
    This being said, the situation now is that among the competing theories (you mentioned just one, cosmic rays) CO2 has so far the largest explanatory power. It is not without problems, however. For instance, the lack of hard, real, and testable predictions; but one has to consider that it is difficult to make experiments and it is difficult to extract signals from noisy data sets. I welcome other theories being proposed and tested. CERN is running an experiment now to test the cosmic ray theory, and I am curious what comes out of that.

    My point is that the other ‘theories’ you mentioned (natural variations, unknown factors, and the like) are not theories. They are even less testable than CO2. To this category also belongs the ‘integrated process’ theory. This is not a theory, it is just a description. To be a theory you would need a mechanism that explains why the global mean temperature could naturally be an integrated process, what the magnitude of the random natural variations is, by which mechanism these random variations can be translated into a century-long trend, etc., etc. In summary, we need to confront *all* theories, CO2 included, with the same standards of skepticism and see which one fares better.

  633. AndreasW Says:

    Eduardo

    So you mean that you did test the hypothesis of CO2 warming? May I ask what test you used and what the result was?

    GDY

    Well, if you want to take the path of the energy budget, with energy coming in and out, you are not interested in temperatures but in heat content. The question you should ask is: Is air temperature a good proxy for heat content? The answer is no.
    Another point is that if you look at the earth with the South Pole in the middle, it’s fair to say that for the vast majority of what you see you have no historic temperature record. That means discussing a global average temperature is meaningless. What is more interesting is discussing patterns where you have a decent record. Do you have a camel pattern or a hockey stick? The Nordic countries and the US clearly have camels. For starters you could count hockey sticks and camels and see which is dominant.

  634. JvdLaan Says:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    Eh what about Physics of CO2. Does that not counting anymore?

  635. JvdLaan Says:

    Aaarg, must read: Isn’t that counting anymore?
    starting to form dyslexia at my age ;-)

  636. HAS Says:

    Moving right along from the “he said she said”: VS, just coming back to intercorrelation in spatial datasets, my initial interest was in the possibility that the confidence limits in the gridded temperature estimates were understated (as well as being biased in time). However, as a consequence of poking around in IPCC WGI and deciding to have a look under the bonnet of a climate model, I do wonder whether these issues mightn’t spill over into the validation of the climate models.

    I’m not sure how well this particular model is regarded, but it was the first I found. “Bergen Earth system model (BCM-C): model description and regional climate-carbon cycle feedbacks assessment” (2009) J. F. Tjiputra, K. Assmann, M. Bentsen, I. Bethke, O. H. Otter, C. Sturm, and C. Heinze. http://www.geosci-model-dev.net/3/123/2010/gmd-3-123-2010.pdf.

    (As an aside, for those that are interested: the description of the model gives you an insight into the complexity of these models and their assumptions.)

    By way of validation they run a base model until it stabilises using pre-industrial atmospheric CO2 concentration to generate a number of parameters that they then compare with the Levitus and NCEP reanalysis of climate. They show the annually-averaged sea surface temperature, salinity, surface air temperature, precipitation and sea level pressure on a Taylor diagram to demonstrate the model performance as a function of normalized standard deviation, centered root-mean-square (RMS), and pattern correlation (see Fig. 1).
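
    (The three Taylor-diagram statistics HAS mentions are easy to reproduce. A minimal MATLAB sketch, assuming the model and observed fields have been flattened to equal-length vectors; the variable names and placeholder data are illustrative only:)

    % Minimal sketch of the three Taylor-diagram statistics, assuming
    % 'modf' and 'obs' are model and observed fields flattened to vectors.
    modf = randn(100,1);  obs = randn(100,1);     % placeholder data
    sd_ratio = std(modf)/std(obs);                % normalized standard deviation
    r        = corr(modf, obs);                   % pattern correlation
    % centered (bias-removed) RMS difference, normalized by std of obs
    crms = sqrt(mean(((modf-mean(modf)) - (obs-mean(obs))).^2))/std(obs);
    % Taylor's identity ties them together: crms^2 = sd_ratio^2 + 1 - 2*sd_ratio*r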

    (I should say as another aside that I want to give more to the adequacy of this validation process per se).

    Now if intercorrelated spatial data is problematic (as I think you, VS, were suggesting – and I did try to understand http://www.cemmap.ac.uk/wps/cwp107.pdf but I know my limits) then, as I understand it, at least some of these statistics being used to validate the model will potentially be overstated.

    Is this a correct assumption?

    If it is, and the bias is in some sense proportional to the intercorrelation between adjacent points, it strikes me that the quality of fit seems to be inversely proportional to the viscosity of the medium through which the measured phenomenon is passing, i.e. the gradient might be expected to be lower in those measures that showed better fit. If I’m right about the statistical theory, then this is a testable hypothesis around validation.

    Probably making a fool of myself in public, but it’s always worth asking.

  637. HAS Says:

    that was “more thought to the adequacy of this validation process per se”

  638. AndreasW Says:

    JvdLaan

    Eh, what about the physics of CO2? You need more than CO2 physics. You need the physics of the feedback system, and that is poorly understood.

    [Reply: This is a good starting point. And this a good elaboration of the net effect of feedbacks leading to a sensitivity close to 3 deg per doubling of CO2. BV]

  639. Tim Curtin Says:

    One or two commenters here have asked whether the Beenstock & Reingewertz finding that “global temperature and solar irradiance
    are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences” are valid at a localised level (to get away from the successive averagings and griddings of GMT per GISStemp et al). I have done the ADF tests for January average temperatures and [CO2] at Indianapolis (picked at random from NOAA-NREL data) from 1960 to 2006, and find that the B-R statement is confirmed. What conclusions are to be drawn from this may be another matter!

  640. VS Says:

    ——————-

    PLAYING THE ARIMA GAME

    ——————-

    The point that I was trying to make in my previous couple of comments, is that the probability arrived at by Zorita, Stocker and von Storch (2008) is not very informative.

    Allow me to elaborate.

    Zorita et al (2008) assumed the temperatures to be a stationary process, an assumption which, as I mentioned here, is not supported by observations.

    How should we proceed then? Well, let’s construct a very simple and naive specification by ‘listening’ to the data.

    ——————-

    SPECIFYING THE NAIVE ARIMA MODEL

    ——————-

    Well, first of all, we found here and here, that the temperature series in fact contain a unit root. The calculations of Zorita et al (2008), when applying the Whittle method, in fact independently confirm this (observed) non-stationarity.

    We will therefore model the first difference series, which is stationary (again, see test results).

    Since the ADF test equation employed three autoregressive (AR) lags in first differences (see test results), we try out that specification. We simply model the (first difference) series as:

    D(GISS_all(t)) = constant + AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

    The estimation results are given here, coef (p-value):

    ————–

    Constant: 0.006186 (0.1302)
    AR1: -0.452591 (0.0000)
    AR2: -0.383512 (0.0000)
    AR3: -0.322789 (0.0003)

    N=124
    R2=0.23

    We furthermore test the errors for normality via the Jarque-Bera test:

    JB, p-value (H0: disturbances are normal): 0.403229
    Conclusion: normality of disturbances not rejected

    ————–

    Note that the constant term is statistically insignificant (the AR terms are significant at the 1% level). Again, we let our test results guide us, and ‘reject’ the presence of a constant term in the simulation equation. (Actually, we ‘fail to reject the non-presence’; I elaborated on statistical hypothesis testing here.)

    We reestimate the model, now without constant:

    D(GISS_all(t)) = AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

    The estimation results are given here, coef (p-value):

    ————–

    AR1: -0.438867 (0.0000)
    AR2: -0.368938 (0.0001)
    AR3: -0.308871 (0.0006)

    N=124
    R2=0.22

    We again test the errors for normality via the Jarque-Bera test:

    JB, p-value (H0: disturbances are normal): 0.393751
    Conclusion: normality of disturbances not rejected

    ————–

    Note: adding a fourth AR term adds nothing to the model, in the sense that the coefficient estimate of the fourth term is equal to 0.018457 with a s.e. of 0.092670, which implies a p-value of 0.8425. We therefore choose not to include a fourth AR term.

    ————–

    We then inspect the disturbances of the error term for autocorrelation, and I give you the Breusch-Godfrey test p-values, for a given set of lags (minimum 2), which takes ‘clean’ disturbances as the H0:

    Lags (p-value):

    2 (0.208611)
    3 (0.245690)
    4 (0.376448)
    5 (0.507945)

    Conclusion: no significant autocorrelation present in disturbances
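
    (For readers whose software lacks this test, the Breusch-Godfrey LM statistic is simple to code by hand. A sketch, assuming ‘res’ holds the estimated residuals and ‘X’ the regressors of the original equation; both are placeholders here:)

    % Minimal sketch of a Breusch-Godfrey LM test; 'res' and 'X' are placeholders.
    res = randn(124,1);  X = randn(124,3);
    p = 2;                                % number of residual lags under test
    n = numel(res);
    L = zeros(n,p);                       % lagged residuals, zeros as presample
    for j = 1:p
        L(j+1:end,j) = res(1:end-j);
    end
    aux  = fitlm([X L], res);             % auxiliary regression
    LM   = n * aux.Rsquared.Ordinary;     % LM statistic, ~chi2(p) under H0
    pval = 1 - chi2cdf(LM, p);            % H0: no autocorrelation up to lag p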

    ————–

    Finally, we take a look at the estimated standard deviation of the error term (i.e. error(t)), and we find that it is equal to: 0.096399.

    So, what we have here is a very simple and naive model that captures the ‘variance’ displayed by the GISS series pretty well.

    IMPORTANT: This is not ‘The Model’ of ‘The Temperatures’. It is a simple, test-derived specification that accommodates the observed non-stationarity, autocorrelation structure and disturbance properties of the GISS series.
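
    (For those who want to reproduce the specification step: a minimal sketch, assuming MATLAB’s Econometrics Toolbox and a vector ‘giss’ of annual GISS anomalies; the placeholder series below is illustrative only:)

    % Minimal sketch of the ARIMA(3,1,0), no-constant estimation, assuming
    % the Econometrics Toolbox; 'giss' is a placeholder for the anomaly series.
    giss = cumsum(0.1*randn(129,1));
    mdl  = arima('ARLags',1:3, 'D',1, 'Constant',0);  % ARIMA(3,1,0), no constant
    est  = estimate(mdl, giss);                       % prints coefficients, s.e.
    res  = infer(est, giss);                          % model residuals
    [~, p_jb] = jbtest(res);                          % H0: residuals are normal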

    ——————-

    SIMULATIONS

    ——————-

    Now, we are going to take our naive ARIMA specification, and ‘generate’ it 100 000 times. Note that, when employing this ARIMA specification, our data do not reject normality of disturbances. Furthermore, the BG test (see above), rejects any residual autocorrelation in the errors.

    We therefore take the liberty of modelling the error(t) variable as normally distributed white noise, with a standard deviation of 0.096399.

    NOTE: The simulated probability here can also be determined exactly by maximum likelihood, as our data generating process is fully specified. However, I’m just too lazy for that right now :) Hence, we simulate. If anybody feels inspired, please do!

    Here’s the Matlab code, with comments:

    %=====================================
    % NAIVE ARIMA TEMPERATURE SIMULATION
    %=====================================

    %Set length of error term, which also implies the number of ‘years’ we want
    %to study. I set our period equal to our estimation sample, namely
    %1881-2008 (128 observations)
    d=128;

    %Set number of ‘last years’ you want to compare
    yrs=14;

    %Input estimated coefficients of ARIMA(3,1,0) process, no constant
    a1=-0.438866631230771;
    a2=-0.368937963283039;
    a3=-0.308870699290478;

    %Set number of iterations for simulation
    B=100000;

    %Define vector to store simulation results
    results=zeros(B,1);

    %Initiate simulation
    for z=1:B

    %We generate a vector of normal disturbances, standard deviation set to
    %estimated value (i.e. sd(e)=0.096399)

    e=randn(d,1)*(0.096399);

    %We clear our ‘first difference’ vector, and enter the first three
    %disturbances as starting values
    x=zeros(d,1);
    x(1)=e(1);
    x(2)=e(2);
    x(3)=e(3);

    %Here we input the first three observed values of the GISS-temp data
    %in our level series, y
    y=zeros(d,1);
    y(1)=-0.2;
    y(2)=-0.22;
    y(3)=-0.24;

    %We generate the first difference series, x
    for i=4:d;
    x(i)=a1*x(i-1)+a2*x(i-2)+a3*x(i-3)+e(i);
    end

    %Here we generate the level series, y, from the first differences, x
    for k=4:d
    y(k)=y(k-1)+x(k);
    end

    %Evaluation code! This part here evaluates the property of the
    %generated series for each iteration. In this particular case, we are
    %comparing the average temperature over year 1881 to 2008-yrs, with the
    %average temperature of 2008-yrs+1 to 2008.

    threshold=mean(y(1:(d-yrs)));
    last_yrs=mean(y((d-yrs+1):d));

    if last_yrs>threshold
    results(z)=1;
    end
    end

    %Calculate and display simulated probability
    disp(mean(results));

    %=====================================
    % END
    %=====================================

    ——————-

    RESULTS

    ——————-

    Let’s now see what our simple simulations tell us. First we run the program, as given above. We are testing, conditional on the specified (naive, non-stationary) data generating process, what the probability is of observing a higher average temperature over 1995-2008 than over 1881-1994.

    Simulated probability: 0.4967

    Not very impressive. How about if we force a 0.2 degree higher average? The code changes appropriately to:

    if last_yrs>(threshold+0.2)
    results(z)=1;
    end

    Simulated probability: 0.2521

    Again, not very impressive. Let’s now measure the observed difference in temperature means over the two sample periods. This turns out to be a whopping (statistically significant) 0.546516291. So what happens when we run the following evaluation code:

    if last_yrs>(threshold+0.546516291)
    results(z)=1;
    end

    Simulated probability: 0.0332

    Now, let’s crank it up, and see what the probability is of observing all of the sample’s highest temperature values in the last 14 years. We modify the code again:

    threshold =max(y(1:(d-yrs)));
    last_yrs=min(y((d-yrs+1):d));

    Simulated probability: 0.0020

    Now, this should get us worried, right? Not really, since we were very (very) restrictive in our ‘demands’ here (i.e. the last 14 values all had to be strictly higher than all the values before 1995). Note that the higher the number of ‘restrictions’ you impose, the lower the estimated probability.

    Take the simulation code and, instead of ‘testing’, just save the value of the last observation (representing temperatures in 2008). This will generate a 100,000-observation vector, which we can then use to ‘estimate’ both the expected value and the standard deviation of the final realization of our variable y. That is, simply replace the whole ‘if’ statement in the evaluation code with:

    results(z)=y(128);

    Below are results from one of the runs. Note that this is a simulation of the distribution of the final value of the temp series, conditional on our DGP:

    Mean: -0.2412
    Std: 0.5189

    Using these values, we can calculate the 95% confidence interval for the final anomaly value in 2008, starting from the 1881-1883 values and assuming an ARIMA(3,1,0) process: -0.2412 +/- 1.96*0.5189.

    This yields the following 95% confidence interval: (-1.258244, 0.775844)

    What is the last observed value in the GISS data? It’s equal to 0.43, which makes it an obedient inhabitant of our 95% confidence interval. In other words, if we just listen to the data (instead of making scenarios based on ‘theory’), our simulation results tell us that observing a temperature ‘anomaly’ of 0.43 in 2008 is not that exciting.
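
    (The same interval can also be read off the simulated vector directly, without the normality step. A short sketch, assuming ‘results’ holds the 100,000 simulated 2008 values from the modified run above:)

    % Two ways to get the 95% interval from the simulated endpoints in 'results':
    m = mean(results);  s = std(results);
    ci_normal    = m + [-1 1]*1.96*s;                 % normal approximation
    ci_empirical = quantile(results, [0.025 0.975]);  % distribution-free version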

    NB. Note that here I disregard the whole discussion about the reliability of the recent temperature record. If these values are inflated, as some allege, the ‘testing difference’ would be significantly lower than 0.55, and the accompanying simulated probability much higher.

    ——————-

    CONCLUSIONS AND PURPOSE OF EXPOSITION

    ——————-

    First off, for the record, this was a simple exposition, not an academic article, so I hope we won’t engage in trivial nitpicking and fail to see the forest for the trees (e.g. we can debate endlessly about how to handle the 3 starting values). Also, I welcome all who have a better idea on how to do this, to do it and post it here. I’m very eager to see your results.

    Probabilities of events happening are always conditional on a certain data generating process (DGP). For these probabilities to have any empirical relevance, the assumptions governing the DGP must be rigorously tested. This is the main difference between my results here and those of Zorita et al (2008). While they disregard the implications of test results when constructing their simulations (sorry, but that’s what you guys de facto do), my simple naive specification adheres to them. In other words, I picked a simple specification which is ‘at peace’ with the observations (please don’t confuse this with this being the specification! …seriously, it will launch an army of ‘strawmen’ from the usual suspects).

    Do note that without rigorous formal testing of DGP assumptions, any simulation result is simply an extrapolated opinion.

    Now, I hope as many people as possible will copy this Matlab code and play around with it. If you spot any errors, do let me know; I have to admit I wrote it down rather quickly :) Also, try simulating a couple of ARIMA(3,1,0) series, and plot the results. This will help you grasp the concept of an integrated (in this case I(1)) series, and you will see why it has absolutely nothing to do with how the series ‘increases’ in terms of ‘polynomial order’. After a while you will (hopefully :) also notice that the generated series indeed resemble the ‘variance structure’ of the annual global mean temperature record.

    Finally, I hope this little exposition will induce a good deal of skepticism towards any (ludicrous) ‘probability’ statement such as:

    “The panel concluded that it was at least 90% certain that human emissions of greenhouse gases rather than natural variations are warming the planet’s surface.” Source: BBC News

    As a side note, I’m really curious to learn the identity of the individual who came up with this particular insult to science. In addition, if somebody could point me to the method used to derive this ‘probability’, even better!

    Cheers, VS

    PS. Eduardo, the reason I’m doing ‘this’ is because your hypothetical probabilities are cited a bit too often as ‘evidence’ of ‘unprecedented’ warming. If you don’t believe me, take a look around on the net (even in this thread). I simply strongly disagree with the idea that observations imply this particular probability. This post was a demonstration of a part of my arguments. I sincerely hope you take no offense.

    PPS. The careful reader will have noticed that my ARIMA estimation results in fact REJECT the random walk hypothesis (for the GISS series). For the GISS series to display the random walk property, the hypothesis AR1=AR2=AR3=0 must not be rejected. I calculate the appropriate Wald statistic for this test, and get the F-statistic, 11.48393, which corresponds to a p-value smaller than 0.0001. We can therefore safely reject the H0 that the GISS series follows a random walk. Note that Alex (in his answer to Pat Cassen) engaged in a similar exercise much earlier, here. His conclusions were the same: GISS temp is not a random walk.
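
    (That Wald/F test can be reproduced with a single auxiliary regression. A minimal sketch, assuming ‘dy’ holds the first differences of the GISS series; placeholder data shown:)

    % Minimal sketch of the random-walk test (H0: AR1 = AR2 = AR3 = 0),
    % assuming 'dy' holds the first differences; placeholder data shown.
    dy = randn(128,1);
    n  = numel(dy);
    y  = dy(4:n);                             % dependent variable
    X  = [dy(3:n-1) dy(2:n-2) dy(1:n-3)];     % three AR lags, no constant
    mdl = fitlm(X, y, 'Intercept', false);
    [p_val, F] = coefTest(mdl);               % joint F test that all three are zero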

    PPPS. Before somebody brings it up. I also estimated the ARIMA specification for 1880-1994, these are the results:

    AR1: -0.439312 (sig at 1%)
    AR2: -0.368242 (sig at 1%)
    AR3: -0.332911 (sig at 1%)

    Furthermore, the s.e. of the regression (i.e. the estimated standard deviation of error(t)) is equal to 0.096167. So estimating the model without the last 14 years, and using those coefficients, doesn’t significantly change our simulation inputs (if anything, these results broaden our confidence interval for the anomaly in 2008).

  641. John Whitman Says:

    A Draft Summary Attempt – Rev 0

    So, I am trying to get my head around where the dialog stands now, almost a week after whbabcock summarized the issues, and almost 20 days since the first VS comment.

    “whbabcock Says: March 17, 2010 at 16:36 – The issues being addressed in this thread relate to a single question, ‘Does available real world data support the hypothesis that increased concentrations of atmospheric greenhouse gases increase global temperature permanently?’”

    whbabcock also says in the same comment:
    “What does all this mean? It could mean that the theory is incorrect. Or, it could mean that the data are not ‘accurate’ enough to exhibit the ‘theoretical relationship.’ It certainly ‘raises a red flag’ as VS has noted several times. And, it does mean that one can’t simply point to highly correlated time series data showing rising CO2 concentrations and rising temperatures and claim the data support the theory.”

    It appears to me that VS’s contention holds: the Beenstock and Reingewertz findings remain, at a minimum, a significant ‘red flag’ for AGW theory.

    It appears, to me anyway, that independent analysis by David Stockwell shows the red flag still waving.

    As to the explanation and the basis/restrictions of the statistical processes used by Beenstock and Reingewertz, there probably (no pun intended) needs to be significantly more continued dialog, of both an educational and a critical nature.

    There seems to be insufficient explanation of the physical processes of the climate that could account for ‘Beenstock and Reingewertz’. That should stimulate more research into the physical climate processes. Good thing there.

    Some good focus was placed on the IPCC scientific method as applied to hypothesis testing of natural variation, as it relates to justifying IPCC support of CO2 as the cause of AGW. Much more dialog on this can be expected.

    I would appreciate more detailed summaries than mine.

    Bart, thanks again for your wonderful venue.

    John

  642. AndreasW Says:

    Tim

    Now we are talking. That’s the way to do it: study the parameters simultaneously. Now throw in cloud cover data, land use change and socioeconomic factors and see what you get.

  643. VS Says:

    Nice one, Tim!

    PS W.r.t. my previous post: For the hardcore skeptics, I also estimated the model for the period 1880-1964 (remember, 1964 was the alleged ‘structural break’, also identified via the Zivot-Andrews testing procedure… you know, when everything changed!), and these are the results:

    AR1: -0.344765 (sig at 1%)
    AR2: -0.332371 (sig at 1%)
    AR3: -0.403615 (sig at 1%)
    Std: 0.088824

    Apart from the expected mutations (as a result of using 3/4 of our series), again we see no radical difference in our estimates.

  644. Gary M Says:

    @ Anonymous/Eduardo

    “It is not without problems, however; for instance, the lack of hard, real, and testable predictions. But one has to consider that it is difficult to make experiments and difficult to extract signals from noisy data sets.”

    As I understand it, this is precisely why econometrics is the correct tool for statistical analysis.

    “This being said, the situation now is that among the competing theories … CO2 has so far the largest explanatory power.”

    However, according to the analysis by B&R and by VS in this thread, the AGW hypothesis has no explanatory power, statistically speaking – as I understand it, other statistical analyses (e.g. OLS) showing apparent correlation are spurious?

  645. A C Osborn Says:

    Re
    Tim Curtin Says:
    March 23, 2010 at 13:55

    One or two commenters here have asked whether the Beenstock & Reingewertz finding that “global temperature and solar irradiance
    are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences” are valid at a localised level (to get away from the successive averagings and griddings of GMT per GISStemp et al). I have done the ADF tests for January average temperatures and [CO2] at Indianapolis (picked at random from NOAA-NREL data) from 1960 to 2006, and find that the B-R statement is confirmed.

    Tim, thank you for answering my question.

  646. AndreasW Says:

    VS

    Would be interesting to see what happens if you throw Michaels and McKitrick’s (2007) paper into the unit root grinder.

  647. eduardo Says:

    @ VS,

    Dear VS,
    I take no offense, and I am learning from the technical implementation of your tests. I just think that your calculations are not explanatory. There is an error in logic.

    Basically what you have done is this:
    – design a statistical model that fits the observed temperature
    – confirm that your statistical model describes the observed temperature, e.g. through the probability of record years.

    What have you learned about the functioning of the system? Not much, I think.

    A real theory must be constructed so that you can logically convince those who don’t believe it.

    What I think you should have done is the following:
    – design a statistical model that describes the *natural* variations of temperature. This means that you must be able to convince everyone (even me) that you do not have the anthropogenic contribution in your model. This can be done by fitting the model in a previous period, before the putative anthropogenic influence kicked in.
    – then confirm that the observed 20th century variations can be described by this model as well.

    I think this is a pretty clear logic. Perhaps you can do it, and then I will congratulate you because that would be real progress.

    For clarification, in Zorita et al we did not assume that the observed temperature is stationary, but that the *natural* variations of temperature are stationary. I have explained this already several times (although you keep repeating the wrong assertion), so I will not repeat it again. The interested reader will be able to discriminate between my explanation and yours.

  648. eduardo Says:

    @ Andreas W

    Dear Andreas,

    Usually a theory is tested with experiments. In this case no experiments are obviously possible, so the tests that are used for ‘anthropogenic global warming’ are based on climate simulations of the 20th century with climate models, which embody the physical processes of the theory.

  649. eduardo Says:

    @ David Stockwell

    I was not discussing B&R, but VS; perhaps that is another thread of discussion.

    I really welcome all these statistical analyses as an orientation to what the underlying physics could be. But I would suggest doing it carefully. For instance, it is very well known that the radiative forcing of CO2 is proportional to the logarithm of the concentration and not to the concentration itself. So what is the rationale for using the concentration of CO2 as a regressor for temperature?
    Further, apart from CO2 there are other greenhouse gases, for instance methane, which contributes about 1/3 of the total GHG forcing, and whose associated radiative forcing is proportional to the square root of its concentration.

    [Reply: That is similar to a point that I have been trying to make a couple of times: Use the estimated net forcings as a regressor, and account for effects of internal variability as e.g. caused by ENSO. Buried deep in this thread is a comment from “MP” going that route, which is worth considering. Or using GCM output. Picking only one forcing (albeit the biggest one) is incomplete, as you correctly point out. BV]
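
    (Eduardo’s point translates directly into how the regressor is built. A simplified sketch using the well-known Myhre et al. (1998) expressions, ignoring the CH4/N2O band-overlap term; the concentration trajectories below are placeholders:)

    % Simplified forcing-based regressors (Myhre et al., 1998), ignoring the
    % CH4/N2O band overlap; co2 in ppm, ch4 in ppb, placeholder trajectories.
    co2 = linspace(290, 385, 129)';   ch4 = linspace(900, 1800, 129)';
    co2_0 = 280;  ch4_0 = 700;                  % pre-industrial reference levels
    F_co2 = 5.35 * log(co2./co2_0);             % W/m^2, logarithmic in CO2
    F_ch4 = 0.036 * (sqrt(ch4) - sqrt(ch4_0));  % W/m^2, square root in CH4
    F_ghg = F_co2 + F_ch4;   % candidate regressor instead of raw concentrations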

  650. VS Says:

    Hi Eduardo,

    Ok, fair enough, I see your point (on the same-sample issue). Let me try again then.

    Now, let’s say one argues that anthropogenic forcings after 1950 in part caused the warming that we then observe as an anomaly in 2008. In that case the anomaly value in 2008 ought to be deviant, considering the (autoco-)variance structure we observe before, say, 1950.

    These are the coefficients that describe the process (same methodology as above) over the period 1880-1950.

    a1=-0.292473426894202;
    a2=-0.361941227657014;
    a3=-0.240667782178227;

    Std=0.084845
    JB=0.397678

    We run the simulation in order to establish the confidence interval, just like in my previous post.

    (-1.2370, 0.7544) of which 0.42 is still a proper element.

    How about we estimate the coefficients on 1880-1935? Surely the forcings then couldn’t have ‘pushed’ around the process that much back then?

    a1=-0.368527004574269;
    a2=-0.392924333253741;
    a3=-0.342340792689471;

    Std=0.085588
    JB=0.684530

    Using those coefficients, we run the simulations, yet again, and we get:

    (-1.1496, 0.6688) of which 0.42 is still an element.

    So, we estimated the model on early data, and projected (using those estimates) towards the future. Again, even if we observe temperatures only between 1880 and 1935, and take the structure of those observations and project them to 2008, there’s (still) nothing exciting going on.

    In plain terms: the temperature anomaly observed in 2008, is perfectly in line with the variance (structure) displayed by temperatures 1880-1935 (or 1880-1950, for that matter).

    Note, also, that the simulated confidence intervals don’t change significantly if you take an earlier (smaller) sample to derive your coefficient estimates (!).
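
    (These re-runs amount to swapping three coefficients and the error s.d. into the earlier script. Wrapped as a function they can be compared in one line; a sketch, reusing VS’s notation, saved as naive_arima_ci.m (file name illustrative):)

    % Sketch: the earlier simulation wrapped as a function so that subsample
    % coefficient sets can be swapped in directly; save as naive_arima_ci.m.
    function ci = naive_arima_ci(a, sd, d, B)
        finals = zeros(B,1);
        for z = 1:B
            e = randn(d,1)*sd;
            x = zeros(d,1);  x(1:3) = e(1:3);             % first differences
            y = zeros(d,1);  y(1:3) = [-0.2 -0.22 -0.24]; % observed 1881-1883
            for i = 4:d
                x(i) = a(1)*x(i-1) + a(2)*x(i-2) + a(3)*x(i-3) + e(i);
                y(i) = y(i-1) + x(i);
            end
            finals(z) = y(d);
        end
        ci = mean(finals) + [-1 1]*1.96*std(finals);      % 95% CI, final value
    end
    % Example call, with the 1880-1950 estimates above:
    % naive_arima_ci([-0.2925 -0.3619 -0.2407], 0.084845, 128, 100000)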

    ***

    Coming from my field, where stationarity is a non-trivial assumption, I really feel you guys are going over it with too many words, and too few tests/proofs.

    So since we’ve been at it in this thread for some three weeks already :), could you please elaborate on how climate scientists ‘deal’ with non-stationarity of temperature data?

    I also have another question (if you can answer it): How many of your colleagues are aware of the fact that these series can only be cointegrated?

    Cheers, VS

    PS. AndreasW: with the references contained in this thread, you could grind them yourself ;)

  651. GDY Says:

    VS – love your rigor; the education continues. My broader point: my suspicion is that the surface temperature record OVERSTATES the variance of a true ‘Global Temperature’ index. The problems with the construction of that index are stated above by others. So, if a trend exists in ‘global temperature’, using any variance statistic of the surface temperature record will obscure (reject) the trend for longer, right? Statisticians and scientists, I would love to hear your thoughts on my undeveloped suspicion…
    maybe the data behind this study would be relevant?
    http://www.agu.org/pubs/crossref/2009/2009JD012105.shtml

    also
    http://www.agu.org/pubs/crossref/2009/2008JC005237.shtml

    Marco – GISS LOTI includes only Ocean Surface Temps, as far as I can tell. Not the temperature at any depth (recent studies have shown continued warming at greater depths).

    The bottom line is that, given all the noise in the construction of proxies, combined with the underlying natural variability in climate, it may well be impossible on 30 years of data (the postulated trend time) to conclude statistically ONE WAY OR THE OTHER!
    Given that uncertainty, now what? It certainly cannot be “end of conversation”, right?
    To me, the next question is: what do the physics say??

    Again, I am just an interested layperson – if I am mistaken, please go easy on me…

  652. AndreasW Says:

    Eduardo

    Now wait a minute! You said yourself that a hypothesis can’t be proven, only disproven. So you have a bunch of hypotheses about the warming, and then you test them one by one, and if they fail the test you rule them out. Of course you can’t do experiments. That’s why you do statistical testing on the data.
    So you have temperature data and you have CO2 data and find correlation. Now you do the statistical testing to see if the correlation is spurious or not.
    So, once more: did you do any statistical testing of the CO2/temperature correlation?

    About the natural variability: I actually do have a ‘natural’ explanation of the recent warming. Atmospheric oscillation changes! And here comes the funny part: my source is the latest IPCC report. You surely can’t find it in the Summary for Policymakers. See if you can find it.

  653. Willis Eschenbach Says:

    Scott Mandia Says:
    March 23, 2010 at 11:03

    Willis said:

    Fifteen years without rising temperatures? We’ve already seen that …

    Ugh! Why do people persist in this fallacy?

    First off, it is NOT true. 20 of the warmest years on record have occurred in the past 25 years.

    Why would that be relevant? Among other things, that is also true for every single year from 1948 to 1963 (GISS data). See here, Update 13.

    But more to the point, people persist in what you deem (without a scrap of evidence) a “fallacy” because the lack of post-1995 warming is real. See here for the math.

    When even Science magazine is asking what happened to global warming, it’s probably worth thinking about …

    [Reply: Did you read this actual post? (With the understanding that the uncertainty of the trend estimate is actually larger than estimated from OLS.) BV]

  654. VS Says:

    Hi Eduardo,

    I just saw your answer to AndreasW. I have to disagree with you (again ;)

    “Usually a theory is tested with experiments. In this case no experiments are obviously possible, so the tests that are used for ‘anthropogenic global warming’ are based on climate simulations of the 20th century with climate models, which embody the physical processes of the theory.”

    As far as I know (or perhaps, believe), theories are tested with facts, or more precisely, with observations. I cannot imagine how you can put (calibrated!!!) theoretical extrapolations (i.e. computer simulations) above empirical relationships.

    On March 5th, I wrote to Heiko:

    “As for ’statistics’ not being able to disprove a model: that’s a novelty for me. The scientific method, as I was taught, involves testing your hypothesis with observations. Statistics is the formal method of assessing what you actually observe.

    Given the hypercomplex and utterly chaotic nature of the Earth’s climate, and the non-experimental nature of observations it generates, I don’t see any other way of verifying/testing a model trying to describe or explain it.

    Here’s an interesting reading, it is an article published in 1944, in Econometrica, dealing with the probabilistic approach to hypothesis testing of economic theory (i.e. also a discipline attempting to model a hypercomplex chaotic system generating non-experimental observations).

    It is written by Trygve Haavelmo, who later received a Nobel Prize in Economics, in part also for this paper.

    Click to access the_probability_approach_in_econometrics.pdf

    You will note that many of the assertions made about the then-standard approach to hypothesis testing in economics are in fact applicable to present-day ‘climate science’ :)”

    And to Bart and Alan, I wrote on March 8th:

    “Alright, allow me to elaborate on why statistics is relevant in this case. Let me start by stating that every, and with that I mean every, validated physical model conforms to observations. This is the basic tenet of positivist science. However, usually within the natural sciences, you can experiment and therefore have access to experimental data. The statistics you then need to use are of the high-school level (i.e. trivial), because you have access to a control group/observations (i.e. it boils down to t-testing the difference in means, for example).

    In climate science, you are dealing with non-experimental observations, namely the realization of our temperature/forcing/irradiance record. In this case, the demand that the model/hypothesis conform with the observations doesn’t simply disappear (if it is to be considered scientific). It is made quite complicated, though, because you need to use sophisticated statistical methods in order to establish your correlation.

    So correlation is, and always will be, a necessary condition for validation (i.e. establishing causality) within the natural sciences. If you don’t agree with me here, I kindly ask you to point me to only one widely used physical model, for which no correlation using data, be that experimental or non-experimental, has been established. Do take care to understand the word ‘correlation’ in the most general manner.

    Now, I’ve tried to elaborate this need in my previous posts, but I fear that we might be methodologically too far apart for this to be clear, so allow me to try to turn the question around.

    Let’s say that you have just developed a new hypothesis on the workings of our atmosphere. You read up on the fundamental results regarding all the greenhouse gases, and the effects of solar irradiance on them. You also took great care to incorporate the role of oceans and ice-sheets etc. into your hypothesis (etc., etc., i.e. you did a good job).

    Put shortly, you developed a hypothesis about the workings of (or causal relations within) a very complex and chaotic system on the basis of fundamental physical results.

    Now, guys, tell me how you think this hypothesis should be validated? Surely it is not correct ‘automatically’, simply because you used fundamental results to come up with that hypothesis? There must be some checking up with observation, no?”

    I’m very curious to hear what you and Paul_K, or anybody else for that matter, think of the above.

    VS

  655. Willis Eschenbach Says:

    Anonymous Says:
    March 23, 2010 at 11:06

    @ Willis

    Willis wrote

    ‘Eduardo, perhaps you misunderstand David. What we see happening is that “scientists” are saying:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    This is absolutely the antithesis of the scientific method, and I’m shocked that you see it otherwise. You are speaking in support of the Fallacy of the Excluded Middle. Do you truly think that there are only two possibilities????? ‘

    Dear Willis,

    Sorry if I misunderstood David. But I think you misunderstood me this time. In my previous postings I wrote that no theory can be proven right, including AGW. So the assertion ‘it *must* be CO2’ cannot stem from me, and if it did it would contradict what I wrote. I do not think I said that, and I do not think that the IPCC said that either. The IPCC writes in terms of likelihood. If ‘scientists’ wrote that sentence or even ‘the science is settled’, that is their problem. Every scientist knows that this can never be achieved.

    Eduardo, thank you for your reply. You are correct that the IPCC never said that. What they do say is that if you remove CO2 from a climate model that is tuned to replicate the past when CO2 is included, it no longer replicates the past. D’oh …

    They then offer this up as evidence that CO2 is the cause of post-1950 warming. See here for an example.

    The fact that this claim is used as “evidence” in what is supposed to be a scientific publication is a sad commentary on the state of climate science. Any tuned model will perform less well if one of the forcings is removed. This shows nothing.

    This being said, the situation now is that among the competing theories – you mentioned just one, cosmic rays – CO2 has so far the largest explanatory power.

    A citation to some evidence would be useful here. Remember that computer results are not evidence … if they were, I’d be a very rich man.

    It is not without problems, however; for instance, the lack of hard, real, and testable predictions. But one has to consider that it is difficult to make experiments and difficult to extract signals from noisy data sets.

    Surely you see the contradiction between that and your previous statement. If CO2 makes no testable predictions, the explanatory power must be zero …

    I welcome that other theories be proposed and tested. Cern is running now an experiment to test the cosmic rays theory, and I am curious what comes out of that.

    My point is that the other ‘theories’ you mentioned (natural variations, unknown factors, and the like) are not theories. They are even less testable than CO2.

    Since you have already said that the CO2 hypothesis makes no testable predictions, how can a competing theory be “less testable”?

    I also fear that you have fallen into the idea that falsification requires an alternative explanation.

    Next, given that the Constructal Law says that a flow system far from equilibrium (like the climate) has preferred states, how is the idea that the earth has a thermostat “not a theory”? If you have some alternate explanation why the earth’s temperature has stayed within ±3% for half a billion years despite meteor strikes and millennia-long volcanic eruptions and wandering continents, what is that explanation?

    Finally, you did not answer the one question I asked, which was, what would it take for you to give up your belief that CO2 is the cause of the post-1950 warming? We have seen no change in the rate of sea level rise (in fact the rate of rise has slowed lately), we have seen no anomalous warming, we have seen no change in global droughts, we have seen no change in global sea ice (not Arctic, global), we have seen no change in global precipitation, temperatures have been flat for fifteen years … so why do you believe that CO2 is causing changes in the climate? See here for details.

  656. Willis Eschenbach Says:

    AndreasW Says:
    March 23, 2010 at 13:34

    JvdLaan

    Eh, what about the physics of CO2? You need more than CO2 physics. You need the physics of the feedback system, and that is poorly understood.

    [Reply: This is a good starting point. And this a good elaboration of the net effect of feedbacks leading to a sensitivity close to 3 deg per doubling of CO2. BV]

    Well, I looked, and neither post said a single word about the physics of the feedback systems. In fact, the second cite said nothing about feedbacks at all, physics or otherwise. Since cloud feedbacks are widely agreed to be the elephant in the room, this is a serious omission.

    [Reply: The net effects of the feedbacks is what matters in the end, and the last ref is very relevant to that. BV]

  657. JvdLaan Says:

    My question was raised because everyone at that certain