Is the increase in global average temperature just a ‘random walk’?

by Bart Verheggen

On the previous thread, a discussion ensued about whether the observed increase in global average temperature is just a ‘random walk’. A rundown (*):

– Anonymous commenter “VS” claims that, according to some statistical method, the increase in global average temperature is not statistically significant, and that global average temperature behaves like a ‘random walk’. Heiko confirms with a simple Excel exercise that, under the assumption of stacked cumulative errors, a quantity can wander off in any direction in the absence of a forced trend (see the first sketch after this rundown).

– The practical relevance of VS’ claim escapes me in light of the graphs shown in the previous post: every single year of the past 30 years has been warmer than every single year between 1880 and 1930. Calling this mere coincidence makes me wonder: how lucky do you feel? (**)

– The applicability of said statistic, and of the assumption of stacked cumulative errors, is questionable in light of the physical nature of the climate: temperatures continuing to wander off towards warmer values without a change in radiative forcing as the driving factor would cause a negative energy imbalance, which would force the temperatures back to where they came from: equilibration. There is conservation of energy, after all. In general, long term changes in global average temperature are the consequence of a non-zero radiative forcing, whereas temperatures jiggle up and down without a clear trend if there is no net radiative forcing acting on the system (see the second sketch after this rundown).

– The earth’s energy imbalance, as measured from space and as deduced from adding up atmospheric and ocean heat content, is actually positive: more energy is coming in than radiating back into space (***). This directly contradicts the idea that the increase in global average temperature is random, since in that case we would expect a negative energy imbalance.

– Radiative forcing of climate is reasonably well known (at least that of the greenhouse gases and of natural forcings such as changes in the output of the sun; much less so for aerosols). The net forcing is positive, so we know that the temperature is being pushed in the warmer direction, i.e. we know that in this case the warming isn’t random. The question is then: could such a warming theoretically be observed even in the absence of a forcing? I think not, for the physical reasons stated above (equilibration). But it’s a bit like asking whether the bike could have moved downhill all by itself, even when you see that someone is riding the bike downhill. An interesting question for a late night drink at the bar, but not very relevant to the question of how the bike got to the bottom of the hill. Let me add, though, that understanding the nature of natural variability in global temperatures is definitely important, and the discussion in the previous thread was definitely thought provoking.

– Changes in atmospheric temperatures are not the only sign of a warming climate. There is the increase in ocean heat content, decrease in Arctic sea ice, thinning of Greenland and Antarctic ice sheets, retreat of glaciers, changes in ecology (e.g. growing season, blooming of flowers, etc), sea level rise, etc. Is this all coincidence? How lucky do you feel?
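First, a minimal Python sketch of Heiko’s Excel exercise (all numbers illustrative, not data): stacking cumulative errors lets a series wander far from its start even with no forced trend, while stationary noise stays near its mean.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 130

# Random walk: each year equals the previous year plus a fresh error,
# so the errors stack cumulatively and the series can drift anywhere.
shocks = rng.normal(loc=0.0, scale=0.1, size=n_years)  # i.i.d. annual shocks
random_walk = np.cumsum(shocks)

# Contrast: stationary noise around a fixed mean does not drift.
stationary = rng.normal(loc=0.0, scale=0.1, size=n_years)

print("random walk, final value:  %+.2f" % random_walk[-1])
print("stationary noise, final:   %+.2f" % stationary[-1])
```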
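Second, a sketch of the equilibration argument as a zero-dimensional energy balance model. The heat capacity and feedback parameter below are assumed round numbers chosen only to illustrate the mechanism, not fitted values: if temperatures have randomly wandered warm with no forcing to sustain them, the resulting negative imbalance pushes them back.

```python
# Zero-dimensional energy balance: C * dT/dt = F - lambda_ * T
# T: temperature anomaly (K), F: radiative forcing (W m^-2),
# lambda_: restoring feedback (W m^-2 K^-1), C: heat capacity (J m^-2 K^-1).
C = 8.0e8        # rough upper-ocean heat capacity per square metre (assumed)
lambda_ = 1.2    # rough net feedback (assumed)
dt = 86400.0     # one-day time step, in seconds

T = 0.5          # suppose temperature has 'wandered' 0.5 K warm...
F = 0.0          # ...with no forcing to sustain it
for _ in range(365 * 50):             # integrate fifty years
    imbalance = F - lambda_ * T       # negative while T sits above equilibrium
    T += imbalance / C * dt           # conservation of energy does the rest

print("anomaly after 50 unforced years: %.3f K" % T)  # relaxes back toward 0
```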

(*): I’ll admit that my knowledge of statistics is not such that I can argue the details of a statistical analysis. Instead, I’ll argue mostly from a physical perspective. I think that’s entirely appropriate – necessary, even – in trying to understand a physical system. Conservation of energy is probably a sufficient reason to dismiss the idea of a random walk in temperatures.

(**): If you feel lucky, you may want to arrange a bet about future warming (or lack thereof) with e.g. James Annan or Brian Schmidt.

(***): Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). These findings provide “direct experimental evidence for a significant increase in the Earth’s greenhouse effect that is consistent with concerns over radiative forcing of climate” (Harries 2001).

Update: Related discussions of the chaotic nature of climate here, here and here. Tamino chimes in as well.


45 Responses to “Is the increase in global average temperature just a ‘random walk’?”

  1. Douglas Watts Says:

    “Temperatures continuing to wander off towards warmer values…”

    This would be like one half of the water molecules in a glass of water reaching absolute zero and the other half becoming the temperature of solar plasma. Or one half of the air molecules suddenly crowding themselves into one corner of the room, leaving the rest of the room a vacuum. Possible, but improbable.

  2. VS Says:

    Bart, I’d rather keep the discussion in the context it was started in (although I do appreciate the time you took to set up your argument).

    https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1299

    Also, please show some respect to other disciplines, and refrain from calling unit root analysis and cointegration a ‘funky statistical method’.

    See also:

    http://nobelprize.org/nobel_prizes/economics/laureates/2003/index.html

  3. Bart Says:

    VS,

    I removed the word ‘funky’ as per your request. I would appreciate some mutual respect for the physical sciences in return. It’s fine by me to keep the discussion at the other thread.

  4. Gavin's Pussycat Says:

    The critical point is whether there is enough natural, unforced variability in temperatures to allow such a random walk to happen. We know from several independent lines of argument that there just isn’t enough power there.

    One obvious way to test whether this test holds water is simply to use general circulation model output instead of real data: can it even detect the presence of a CO2 forcing in the model output, or does it just swallow it up into its noise statistics? (See the sketch below.)

    …and you would still have to explain away why CO2 doesn’t have the effect it’s supposed to have according to physics, which looks just like what we’re seeing…
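A rough sketch of that test, with all numbers assumed for illustration (the “forced” series below is a synthetic stand-in for GCM output, not real model data): apply an augmented Dickey-Fuller unit-root test to a known forced trend plus red noise and to a genuine random walk, and see whether the test can tell them apart.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 130

# Stand-in "model output": a deterministic forced trend plus AR(1) weather noise.
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.5 * noise[t - 1] + rng.normal(scale=0.1)
forced = 0.007 * np.arange(n) + noise   # ~0.7 C/century forced warming

# A genuine random walk with shocks of similar size, for comparison.
walk = np.cumsum(rng.normal(scale=0.1, size=n))

for name, series in [("forced trend + AR(1)", forced), ("random walk", walk)]:
    stat, pvalue = adfuller(series, regression="ct")[:2]
    print("%-22s ADF %+.2f  p = %.3f" % (name, stat, pvalue))
# With adequate power the test rejects the unit root for the forced series
# (small p) but not for the genuine random walk.
```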

  5. TCO Says:

    There was a dude called “Chefen” who ran out with some leading stories about how temperature series were just created by random noise. He got linked to from Climate Audit and ran around trumpeting his in-progress work. I started asking some simplistic questions (I’m not familiar with the math, but CAN think critically). He blustered and called me an “idiot”. But it turns out… that I asked the critical question. What he had done was fix the average AND fix the start point… which, duh… basically fixes the trend!

    He did actually admit the error (rare for a blogger)… although he wouldn’t quite crawl down from some of his rhetoric against me. Then he said he was working on resurrecting the theory. He ran with that for over a year (no updates… too busy, see!). Then his blog disappeared. So he got to have his say and roil the waters a bit… and it’s not even citable, or contestable, now!

  6. sidd Says:

    I just posted this in the last thread, but this one seems fresher:

    “the GHG forcings are I(2) and temperatures are I(1) so they cannot be cointegrated, as this makes them asymptotically independent.”

    Let me see. I take a diode, biased in the exponential region, and put a varying current through it. I measure both the current and the voltage drop across the diode and discover that I is exponential and V is linear, therefore they are “asymptotically independent”

    sidd

  7. VS Says:

    For reference:

    http://motls.blogspot.com/2010/03/tamino-vs-random-walk.html#more

    The discussion we are having, however, is still alive in Bart’s previous thread.

  8. dhogaza Says:

    And Motl’s been discussed in the previous thread. VS is apparently impressed by the strawman army Motl builds up and then knocks down, because he’s unaware that the post is simply a series of “real science would do…” statements describing things climate science actually *does* do…

    Further revealing VS’s ignorance of climate science …

  9. Al Tekhasski Says:

    Bart, you are basing your construction on an alleged fact that there is a strong “measured” radiative forcing. You refer to a blog that refers to an article by Murphy et al (2009)

    However, if you critically examine the article, you will find that the “measured” warming can be linked to only 1/10th of the theoretical forcing from CO2 increases. The remaining 90% they break into various highly dubious things like “volcanic aerosols” and “tropospheric aerosols” (aren’t those mostly lingo for clouds?).

    The abstract also contains goofy things like “About 20% of the integrated positive forcing by greenhouse gases and solar radiation since 1950 has been radiated to space”, because greenhouse gas forcing cannot be “integrated” and later “re-radiated”, since they are the same thing: instant radiant fluxes.

    Given that estimation of “ocean heat content” is based on even more undersampled and less reliable data than the surface record, and all the aerosol business is highly “uncertain”, the speculations about where the warming actually goes can be stretched to suit any wishful idea. Additionally, if someone devotes the precious tiny space of an article abstract to mentioning conformance with IPCC opinion, this immediately raises a flag, at least for me.

    The satellite proof that OLR is smaller today than in 1970 is also dubious: they did not have proper instrument resolution back in 1970. I have not looked into backradiation measurements, but all these estimates have to be taken with big skepticism, because global averages cannot be reliably estimated on the basis of a vastly undersampled grid of sensors. All these “measurements” also suffer from another loosely-specified and evasive parameter – cloudiness – which is prone to subjectivity in the selection of data. For example, the Griggs-Harries results are obtained after rejection of 99.3% of the data from 1970.

    Obviously, the search for a greenhouse forcing signature requires more research, but one trend seems clear – as studies become deeper, less effect seems to be attributable to CO2.

  10. Bart Verheggen Says:

    Al T,

    Speculations (about where the warming actually goes, in this case) can NOT be stretched to suit any wishful idea. The whole point of science is to constrain the stretch.

    Not knowing everything is not the same as knowing nothing.

    Stop the handwaving.

  11. Al Tekhasski Says:

    Bart,

    It is true that not knowing everything is not the same as knowing nothing. But not knowing the behavior of a factor that has 30 times more effect on climate than elusive greenhouse forcings is not excusable in science. Not knowing what dictates the amount of cloud cover on Earth and assuming it to be a constant, free-fudge parameter is unscientific. Ignoring the lack of historical data on this parameter while claiming that models correctly reproduce past climates is garbage science. Deriving trends in chaotic meteorological fields using 36 sensors when one needs 800,000 of them is junk science. Deriving scary warming scenarios out of these trends and models is a scientific mistake. Advising governments on this basis is idiocy.

    You said “speculations cannot be stretched”, while the article of Murphy (2009) is a perfect example – they managed to stretch the theoretical forcing of CO2 down to 1/10 of it, to fit the ocean data.

    I have formulated several concrete points why climatology should not have the confidence it claims: (a) undersampling of data by a factor of 100, (b) lack of a prognostic model for cloud cover formation and its role in the greenhouse effect, and (c) inherent uncertainties in mapping three-dimensional cloudiness/haze into one-dimensional OLR.

    If you call this “handwaving”, fine.

  12. dhogaza Says:

    Not knowing what dictates the amount of cloud cover on Earth and assume it as a constant and free-fudge parameter is unscientific.

    What annoys me most of all are people blowin’ it without knowin’ it …

    From the GISS Model E documentation:

    Cloud processes

    CONDSE is a driver that sets up the vertical arrays for the column models for moist convection and large scale condensation, and accumulates diagnostics and output for the radiation and other modules.

    Moist convection

    The moist convection routine is a plume based model (Yao and Del Genio, 1995) that incorporates entraining and non-entraining plumes, downdrafts (which can also entrain environmental air), subsidence (using the quadratic upstream scheme).

    Large scale condensation

    The main cloud generating routine LSCOND is based on Del Genio et al 1996, with some modifications to improve the simulation of the nucleation of super-cooled precipitation and the estimate of near-surface cloud formation in very shallow pbl conditions.

    Clouds are modeled, it’s not as you describe.

    I won’t do the blow-by-blow dismantling of the rest of your post, but if you think anyone knowledgeable is going to be swayed by misrepresentations, you’re mistaken.

  13. Al Tekhasski Says:

    dhogaza, you must be confused. These subroutines are still deeply parametrized at sub-grid scale, and therefore still result from some sophisticated curve fitting to observational data.

    “Even with the rapid pace of improvement of computing power, it is impossible to resolve the important subgrid structure and the spectral information of clouds in the foreseeable future for global climate model simulations. As a consequence, simple subgrid-scale models and aggregated spectral hydrometer information are still needed to parameterize clouds.” – doi:10.1029/2002JD002523

    Having stated the above, here is a question for you: why, with all these fine modeling efforts, does ModelE arrive at a cloud cover of about 58%, while ISCCP observations show 69%? Gavin would say it is “good enough”. But allow me to remind you that a 4% change in cloud cover creates a radiative imbalance that dwarfs the (alleged) effect of an entire CO2 doubling. Yet ModelE is 11% off reality, which should be equivalent to an octupling of CO2. How would you evaluate the miserable effects of greenhouse gas variations if you miss so much in the basic mechanics?

    I have nothing against efforts to improve models with more and more sophistication and computing power. The efforts are impressive. However, whoever drives these efforts must clearly comprehend that all their parameterised entrainment of plumes is no more than a scientific toy. Until the global coupled models are able to produce realistic weather events at realistic time-space scales without blowing up, and produce consistent convergence of features with decreased grid size, nothing you say will be more than a subject of curiosity, and cannot be used as justification for driving gasoline prices up through carbon taxation.

  14. philc Says:

    Bart, you seem to miss the basic point of all the statistical arguments. In order to understand what is going on, you have to understand the data. Barring a sound, from-first-principles model, the ONLY tool available is statistics. We have this bunch of numbers; what do they mean? VS has shown commendable patience in trying to teach us all some statistics, which is sorely lacking in almost all climate science research.

    Given the chart of temperatures shown everywhere (GISS, CRU, NCDC), the question is: will the next point we measure be higher or lower than the last point, and by how much? As VS points out, the statistics say that this cannot be predicted, because the data have no predictable trend.

    The only reliable prediction anyone can make, looking at those graphs, is the old weather forecaster’s trick: “tomorrow’s temperatures will be about like today’s, maybe a little closer to the average.” In other words, autocorrelation: the closer in time you make two temperature measurements, the more likely they will be close together (see the sketch below).

    Until the climate science community starts really working from data, rather than supposition, we will continue to be in trouble.
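philc’s forecaster’s rule is essentially an AR(1) process; here is a minimal sketch (parameters illustrative only) of how such autocorrelation decays with the time between two measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1): tomorrow is like today, pulled a little back toward the average.
phi, n = 0.8, 10000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Sample autocorrelation falls off roughly as phi**lag: measurements close
# in time are more alike than measurements far apart.
for lag in (1, 2, 5, 10):
    r = np.corrcoef(x[:-lag], x[lag:])[0, 1]
    print("lag %2d: autocorrelation %.2f (theory %.2f)" % (lag, r, phi ** lag))
```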

  15. Bart Verheggen Says:

    philc,

    Climate models are physics-based. The observations have to conform to a coherent framework. You seem to miss my point: regardless of whether the naked numbers would be consistent with a random walk, observations and our physical understanding of the system (including such things as conservation of energy) clearly point to the trend not having been random.

    You further miss the point that it is *not* about next year’s datapoint. It’s about the long term trend.

    Until the opposition to science starts working from a coherent framework to try to understand all the data, they will continue to not be taken seriously.

  16. Pat Cassen Says:

    Al – Claiming that “Deriving trends in chaotic meteorological fields using 36 sensors while one needs 800,000 of them is junk science” suggests to me that you either did not read Wild’s review paper past the first page, or that you read it but didn’t understand it.

    You yourself referred to the Palle earthshine data. Did you see the update on that, cited by Wild?

    Or are you just dismissing that paper because…?

  17. dhogaza Says:

    First Al says this:

    Not knowing what dictates the amount of cloud cover on Earth and assume it as a constant and free-fudge parameter is unscientific.

    Cloud cover is a constant, and that constant is input as a parameter…

    Then:

    Having stated the above, here is a question for you: why, with all this fine modeling efforts, the ModelE arrives at cloud cover at about 58%, while ISCCP observations show 69%?

    He in essence states that cloud cover percentage is an emergent property – and a wrong one. But if they were doing as he first said, they’d just parameterize the subgrid processes constrained by the 69% observation.

    (I don’t necessarily trust his claims regarding the results vs. observation in the first place, but this contradiction is interesting).

    dhogaza, you must be confused. These subroutines are still deeply parametrized at sub-grid scale, and therefore still result from some sophisticated curve fitting to observational data.

    And apparently fitting to observed data is unscientific? Sort of like Newtonian mechanics was totally useless because he didn’t understand what gravity *is*, but only its effects on a non-relativistic scale, based on physical observations?

  18. dhogaza Says:

    And of course the parameterizations are much more complex than Al’s indicating, and used in a much less simplistic way than he states.

  19. Al Tekhasski Says:

    dhogaza: no, there is not much of a contradiction. The IPCC uses two dozen models as its basis, and it appears that only two or three extremely recent ones have started to use more sophisticated cloud models/parameterizations. Instead of looking for contradictions in quick blog posts, you had better address the real question – why does the “emergent property” (what terrible self-invented terminology!) emerge 11% off observations, which should induce an imbalance equivalent to an octupling of CO2?

    Or maybe more… For example, if you try this full-spectrum model, http://forecast.uchicago.edu/Projects/full_spectrum.html you will find that a 1.4% increase in low clouds completely negates the warming from a CO2 doubling.

    Regarding fitting parameterizations to observed data: you do not limit your model parameters to molecular viscosity and thermal conductivity, do you? Then yes, there is one little difficulty. If you need your parameterization to work over climatological time scales, don’t you think that you would need data over at least the same time scale, and not some 5-year-long sporadic data set from only a handful of locations? Ideally, the model should re-create all “prognostic properties” of climate down to the ice ages. Do you have global cloud cover data for the past 400,000 years to determine your correct parameterizations?

    Speaking back on topic: why are there so many ground stations that show opposite trends in temperature over the past 100 years, while being just 50-70 km apart?

  20. Al Tekhasski Says:

    Pat Cassen Says:

    “…suggests to me that you either did not read Wild’s review paper past the first page, or that you read it but didn’t understand it.”

    And your remark suggests to me that you are not familiar with Shannon-Nyquist sampling theorem, or don’t understand the fundamental importance of it for data acquisition.

    “You yourself referred to the Palle earthshine data. Did you see the update on that, cited by Wild?”

    Yes, I saw some re-analysis that led to a correction of the abnormal spike for year 2003. So what?

  21. RedLogix Says:

    Al T;

    Seems to me that you are arguing for 800,000 spatial measuring stations in order to properly sample the earth’s temperature on a scale that accounts for the 50km spatial variance of normal weather patterns. Please correct me if I’ve read you incorrectly.

    Your point regarding the Shannon-Nyquist sampling theorem would make more sense to me if it were spatial analysis we were concerned about. But I was rather under the impression the whole AGW/CO2 thing was primarily in the time domain, i.e. excess fossil CO2 rising over the last 150 years and… well, it strikes me that most thermometers are being read a lot more often than that. More than enough to satisfy poor old Nyquist and Shannon.

    Or are you arguing for some unsuspected cross-correlation between the spatial and temporal? That the temperature stations we are reading just happen to be in places where the temperature is increasing, and are missing all those locations where it is decreasing? Surely not.

  22. dhogaza Says:

    IPCC uses two dozen models as their basis, and it appears that only two or three extremely recent ones have started to use more sophisticated cloud models/parameterizations.

    This is a weakness, IMO fallout from the “every country has to agree” mode of operation of the IPCC.

    I’ve read that in this round, the models that are accepted are going to be trimmed in number. I haven’t read how they intend to get agreement to that by member countries pushing their own researchers’ models.

  23. dhogaza Says:

    Well, the professionals who designed the USCRN believe that 100+ sites are sufficient for monitoring surface climate in the US.

    For understanding the physics of cloud formation, etc etc, you don’t need to sample the entire planet. You can understand it one big thunderhead at a time, so to speak.

    So I really don’t know where this claim comes from.

  24. Al Tekhasski Says:

    RedLogix: The fact is that some stations show decreasing temperatures (over a century), and some stations show increasing temps. For example (for Texas), the Meeker location shows a downtrend (-0.13C/c), while the Guthrie location shows a positive trend (+0.6C/c).
    Another example: Lubbock shows (+1.7C/c), while Crosbyton is down (-0.4C/c)
    (http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?lat=33.65&lon=-101.25&datatype=gistemp&data_set=1)

    The thing is that the pair in the first example is 60 km apart, and the second pair is only 53 km apart. It means that these pairs see the SAME SKY, and therefore the same average backradiation! See the problem now? If the main reason for the temperature trend is the CO2 increase and the corresponding (alleged) imbalance, these closely-located stations MUST show comparable trends; at least the direction must be the same. It is not.

    This means that there is substantial spatial variation in the temperature field, and trends that are unrelated to CO2. Maybe there are locations in between where the trend is +2C/c? Or -2C/c? How do you know? One thing is clear – the current grid of stations is too coarse.

    One might think that the stations are located randomly. But how do you know? The question is, how representative are these stations? The only reliable scientific way would be to increase the number of stations by 4x, 16x, 64x, and show that the global trend approaches some asymptotic, regular value. Alternatively, one could try to pick a lesser number of stations in a sort of random way, but we already know that the initial grid is insufficient and non-representative. Even now we have huge data scattering; with fewer stations it will be complete havoc.

    Technically speaking, the near-randomly fluctuating spatio-temporal field must always have several points (even, say, 100) that show exactly the same trend as all 800,000 globally averaged. I would think that there must even exist a single location that shows the “correct trend” over the recorded period of time. The question only is where that location is. Even then there is no guarantee that the trend at this lucky location would follow the global average over extended periods. This concern should answer the “dhogaza” remark about “professionals” and their belief in “100 sufficient stations”.

    Cheers,
    – AT

  25. PaulM Says:

    I am not sure whether Bart understands random walks. There is nothing at all surprising about each of the last 30 years being higher than every year in the period 1880-1930. This is quite likely to happen in a random walk. See for example the web page of Briggs (a professional statistician) at
    http://wmbriggs.com/blog/?p=257

    [Reply: I’m not sure whether you understand physics. Read my newer post. BV]

  26. David S Says:

    What an interesting thread! I don’t understand much physics, but I do know that however clever a model, if it has inadequate predictive power, the physics must be wrong or at best only partly understood, and simply recalibrating is effectively cheating. The statisticians here disagree as to whether the models do the job, and some of the physicists don’t seem to care either way about the stats. They should.

    [Reply: This discussion has no relevance for climate models. BV]

  27. Pat Cassen Says:

    Al –
    We started talking about measurements of the albedo trend or, more importantly in the present context, trends in SSR. If you are rejecting the data in Tables 1, 2 and 3 of Wild’s review paper (J. Geophys. Res., 114, D00D16, doi:10.1029/2008JD011470, 2009) on the basis of a strict application of Shannon-Nyquist, I think you’re missing the point: RedLogix (March 16, 2010 at 11:23) has it right.

    As for temperature trend correlations with distance, people have been worrying about this since Hansen and Lebedeff (JGR, 92, 13,345, 1987) (cited 531 times, according to google; this is a well-studied subject). I find no serious objections to their analysis in the literature, but maybe I missed something. They state “The temperature changes at mid-and high latitude stations separated by less than 1000 km are shown to be highly correlated; at low latitudes the correlation falls off more rapidly with distance for nearby stations.” See their Fig. 3. Again, we’re talking about trends. If you have a good analysis that challenges these conclusions, write it up. There will be many interested researchers.
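For concreteness, here is a sketch of the kind of pairwise check behind the Hansen and Lebedeff figure. This is not their actual code; the `stations` dict and the synthetic series are hypothetical stand-ins for real station records of equal length.

```python
import itertools
import numpy as np

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def correlation_vs_distance(stations):
    """For every station pair, return (separation in km, correlation of series)."""
    pairs = []
    for (_, (la1, lo1, t1)), (_, (la2, lo2, t2)) in \
            itertools.combinations(stations.items(), 2):
        pairs.append((distance_km(la1, lo1, la2, lo2),
                      np.corrcoef(t1, t2)[0, 1]))
    return sorted(pairs)

rng = np.random.default_rng(3)
shared = rng.normal(size=30)                      # common regional signal
stations = {                                      # id: (lat, lon, anomalies)
    "A": (33.6, -101.2, shared + 0.3 * rng.normal(size=30)),
    "B": (33.9, -101.0, shared + 0.3 * rng.normal(size=30)),
    "C": (48.0, -110.0, rng.normal(size=30)),     # distant, unrelated station
}
for d, r in correlation_vs_distance(stations):
    print("%7.1f km   r = %+.2f" % (d, r))
```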

  28. PaulM Says:

    What is that supposed to mean? What newer post are you referring to? What makes you doubt my understanding of physics? I thought this post was about random walks? In fact I am not even sure whether you understand elementary probability – your last points about ice melting are related to warming, not separate coincidences.

    As for not understanding physics, your remark about conservation of energy hits the mark there – in what sense is the energy of the earth conserved?

    Please listen to the comments of VS, philc, Lubos Motl and Matt Briggs. Climate scientists need to apply proper statistical techniques to their data. Until they do so, they will not be taken seriously by scientists outside their clique.

  29. Bart Verheggen Says:

    PaulM,

    I meant this post itself, sorry. Re the physics:

    Basically, a random walk towards warmer air temps would cause either a negative radiative imbalance at the TOA, or the energy would have to come from other segments of the earth system (e.g. ocean, cryosphere). Neither is the case. It’s actually the opposite: a positive radiation imbalance, and other reservoirs also gaining energy. Which makes sense in the face of a radiative forcing.

    Thus, on physical grounds it seems clear that the increase in global average temperature over the past 130 years has not been random, but to a certain extent deterministic. It’s a consequence of the basic energy balance that the earth as a whole has to obey.

    Whether the ‘naked values’, in the absence of any physical meaning or context, have a unit root or not (VS seems to have backpedaled from pure randomness) is a purely academic mathematical question. It bears on how to analyse the data, indeed, but it doesn’t change the physics.

    DavidS,

    This has nothing to do with models.

  30. Al Tekhasski Says:

    RedLogix, you wrote:
    “Or are you arguing for some unsuspected cross-correlation between the spatial and temporal? That the temperature stations we are reading just happen to be in places where the temperature is increasing, and are missing all those locations where it is decreasing? Surely not.”

    Surely there is a good possibility. You are probably aware of UHI. In fact, the nearby presence of people (agriculture, greenbelts, housing) acts in the warm direction. Now consider this: climatologists did not plan the positions of those met stations, nor did they specify accuracy or sampling requirements for the thermometers. They are simply using what is historically available. But what was the original purpose of all these stations? It was to serve people: to advise about the weather, what to wear, etc. Therefore, the dominant share of met stations is historically placed near people, where people live. And most living places have a tendency to expand. Therefore, ALL stations must have some UHI bias, all of them.

    Interestingly, there are many stations in the GISS database that stopped reporting data (or the data are not collected?). In Texas, a quick look gives you this:
    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425722600000&data_set=1&num_neighbors=1
    id=425722540030
    id=425722570040
    id=425722490020
    id=425722600030
    id=425722590050
    id=425722430030
    id=425722430030
    id=425722530050
    id=414763420030

    All these stations reported a serious decline in temperatures.
    What kind of conclusion would one draw from this?

  31. Al Tekhasski Says:

    Pat wrote:
    “on the basis of strict application of Shannon-Nyquist, I think you’re missing the point: RedLogix (March 16, 2010 at 11:23) has it right.”

    No, he has not. I explained in detail: some spatial locations have an uptrend, some have no trend, and some have a downtrend. Therefore, the spatial selection obviously affects the global result. We are dealing with a spatio-temporal chaotic field, and therefore Nyquist-Shannon must be applied in all dimensions of the object.

    Speaking about the sampling rate of temperatures: sampling only its min and max twice a day does not satisfy the sampling requirement – this sampling is complete BS from a technical standpoint, a total, unrecoverable screw-up of the data (see the sketch after this comment).

    Regarding the Hansen and Lebedeff statement about spatial correlation: experimental data from their own database do not support it at all. Look at the data yourself, say, near the Canadian border. Examples:

    Worland -up, Worland Ap – down, just 6km apart;

    Powell Field station – up, Crow Agency -down – 137km apart

    Yellowstone Park Mammoth – up, West Yellowstone – down, 48 km apart

    Anaconda – up, Philipsburg RS – down. 34km apart.

    It is obvious that the Hansen-Lebedeff assertion about correlation is untrue.
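On the min/max sampling point specifically, a small sketch with a synthetic, deliberately skewed diurnal cycle (all numbers illustrative, not observed data): with an asymmetric cycle, the conventional (Tmin + Tmax)/2 does not recover the true daily mean.

```python
import numpy as np

hours = np.linspace(0, 24, 24 * 60, endpoint=False)   # one sample per minute

# A skewed diurnal cycle: a sharp mid-afternoon peak over a long, flat night.
offset = ((hours - 15 + 12) % 24) - 12                # signed hours from 3 pm
temps = 10 + 12 * np.exp(-0.5 * (offset / 3.0) ** 2)

true_mean = temps.mean()
minmax_mean = (temps.min() + temps.max()) / 2
print("true daily mean:    %.2f C" % true_mean)       # ~13.8 C here
print("(Tmin + Tmax) / 2:  %.2f C" % minmax_mean)     # ~16.0 C, biased warm
```

Note that this is a level bias; for trends it matters only to the extent that the shape of the diurnal cycle changes over time.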

  32. Scott Mandia Says:

    Al,

    How much UHI occurs over the oceans that cover 70% of the Earth? Satellite-derived global temp trends are fairly close to those from surface obs.

    Essentially, the “rising tide lifts all boats” cliché.

    UHI and siting have been studied, and UHI is accounted for. Most studies show that the trend has not been influenced by UHI or siting, despite the claims of Watts, D’Aleo, and others.

  33. Al Tekhasski Says:

    Scott: “satellite-DERIVED global temp trends are … ”

    The keyword here is “derived”. Given the number of corrections, adjustments, weighting functions, calibration target drifts, instrument surface deterioration, etc., one can derive anything from brightness channels.

    “UHI and siting have been studied and UHI is accounted for” is a classic AGW anecdote. Just like the “bad float cold bias” was accounted for. You cannot account for something when you have no trusted, known-accurate reference at the same time and spot.

    The GISS data I posted earlier show not just random variation in time; they have a systematic trend nearly a century long, over the entire record length, all in one direction. Unfortunately, this direction can be opposite even when sites are only 6-60 km apart. All this allows one to conclude with full confidence that these data are just garbage, and all statistical exercises on them are meaningless. Garbage is garbage no matter how much sense one tries to make out of it, even trying very hard.

    The GISS data also reveal that a substantial number of stations have been dropped from contributing to the global temperature index. Many of these stations exhibited a very strong “cooling trend”; see the examples above. It does not look like these counter-trends have any radiation-based physics behind them, which, by association, casts serious doubt on the nature of the individual local “warming” trends. I insist: UHI or no UHI, it is all garbage, and no confident conclusion can be derived from this, at least not to the extent that “climate is in crisis”.

  34. Pat Cassen Says:

    OK, Al, you have looked hard at the data and I have not. So why are you wasting your time here arguing with Scott and me instead of writing this stuff up and submitting it to a journal? There are lots of folks who would be interested in seeing that Hansen and Lebedeff, and twenty subsequent years of analysis, are garbage.

    In the meantime, I’ll keep your points in mind, but tempered by the fact that you’re just some guy posting on a blog (like me), and other conclusions have been reached by those who have published their analyses, using methods that I can understand.

    By the way, I presume that you have similar problems with all the other indicators of warming (polar ice, glaciers, phenological). But no need to elaborate.

  35. Al Tekhasski Says:

    Pat, I don’t have problems with indicators of warming. I only have a problem with attribution of them to change in greenhouse gases.

  36. Paul_K Says:

    Bart,
    My congratulations to you on an excellent thread, and more so for allowing a truly open debate.

    You wrote:
    “Whether the ‘naked values’ in the absence of any physical meaning or context have a unit root or not (VS seems to have backpedaled from pure randomness) is a purely academic mathematics question. It bears on how to analyse the data indeed, but it doesn’t change the physics.”

    I believe that you are still missing the essential point being made by VS and others. It is possible that the physics are correct, but how do you go about demonstrating that?

    What VS has been saying (unambiguously in my view despite some misunderstanding by some posters) is that statistical rigour needs to be applied to any estimation of confidence in a postulated relationship. The statistical tools for the analysis of correlation in time-series were developed specifically to avoid spurious claims to correlation and/or inappropriate estimates of confidence or residual error in such correlation.
    In this context, it matters not a whit that the temperature time series is controlled by physics. It does matter however that inappropriate statistical methodology is used to claim a level of confidence in a correlation which is simply not warranted given the limitations in the data themselves.

    [Reply: VS’ thesis has consequences for trend analysis, but it doesn’t impact the causal relation between CO2 and temperature. Also note that the ‘rootiness’ and randomness are not as unambiguous as you make them out to be. Check e.g. this list (h/t Pat Cassen) and Allan’s subsequent posting of some abstracts. BV]

  37. Peter Wilson Says:

    Bart

    Great thread, but I am little disturbed by a remark you made a while back, to wit:

    “Until the opposition to science starts working from a coherent framework to try to understand all the data, they will continue to not be taken seriously.”

    I will ignore for now the idea that opponents of AGW need to work from a “coherent” framework, as if we were all singing from the same hymnbook. What disturbs me is that you refer to sceptics as “opponents of science”.

    Bart, you are NOT science. The IPCC is NOT science. The NAS is NOT science. Science is a method of attempting to understand the world through interrogation of nature, by the formulation and impartial testing of hypotheses. To identify your position as “being” science is preposterous, and insulting to the many researchers who are trying to honestly understand the climate we live in, but come to different conclusions than yourself.

    Such arrogance is itself profoundly un-scientific, and I invite you to reconsider your position on this matter.

    [Reply: We seem to build on a similar argument, but from the diametrically opposed direction. I don’t claim to “be science”. But indeed, science is about following the scientific method. See this great presentation of how climate science stacks up against the different scientific methods/criteria (start at slide 30 to jump to the philosophy of science part). Or this book chapter.

    I stand by what I wrote. Most “skeptics” lack a coherent framework, and only arrive at their (predetermined?) conclusions by omitting a large part of the data/the physics/something. Or by logical fallacies such as confusing uncertainty with knowing nothing. Or by scientist-bashing to lower the credibility of science. There are a lot of non- and even anti-scientific tactics being used. I know there are exceptions; there are people who honestly raise questions. I don’t mean to generalise that any and all criticism falls under this category; of course that is not the case. But most claims that AGW is all wrong or that most scientists are manipulating their data/research do indeed fall under this category. A claim that a whole scientific field is wildly wrong based on innuendo, false accusations and some emails is what I would call arrogance.
    Now I realize that we wholeheartedly disagree on that. Let’s leave it at that: We agree to disagree. Repeating our respective arguments ad nauseam doesn’t serve any purpose. I realize that my arguments won’t convince you, and likewise your assertion that a whole scientific field is wrong doesn’t convince me. BV
    ]

  38. docmartyn Says:

    “Bart Verheggen Says:
    Basically, a random walk towards warmer air temps would cause either a negative radiative imbalance at TOA, or the energy would have to come from other segments of the earth’s system (eg ocean, cryosphere).”

    You appear to have forgotten something important: we live on a planet that rotates around its own axis and around the sun. The anomalies present in the ‘average temperature’ are dwarfed by the changes in temperature, anywhere on the planet, during the day and night cycle.
    Average temperature is a construct; it does not exist in the physical world, and it does not exist in time-space (as it is constructed from two measurements taken 10-14 hours apart).

    You cannot treat it as an actual temperature and plug it into equations for a ‘negative radiative imbalance’. This is nonsense: the sum of the radiation from one cannonball at 0 degrees C and one at 100 degrees C is NOT the same as from a pair of balls both at 50 degrees C. Absolute temperature is real, and is a major component in a wide range of thermodynamic processes. An average temperature, or an anomaly from some baseline, is not real in any physical sense. Now think of ‘average temperature’ as money and absolute temperature as gold; sometimes they are interchangeable, sometimes not. Gold exists; it is made of atoms. Money does not exist except in people’s heads; money is not real, it does not physically exist.
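As an aside, the cannonball arithmetic does check out under the Stefan-Boltzmann law; a quick sketch:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Flux radiated per unit area: one ball at 0 C plus one at 100 C,
# versus two balls both at 50 C. T**4 is convex, so the averaged pair
# radiates less than the hot/cold pair.
pair_hot_cold = SIGMA * (273.15 ** 4 + 373.15 ** 4)
pair_average = SIGMA * 2 * 323.15 ** 4

print("0 C + 100 C: %.0f W/m^2" % pair_hot_cold)   # ~1415
print("50 C + 50 C: %.0f W/m^2" % pair_average)    # ~1237
```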

    [Reply: Could you try to make some sense? BV]

  39. Peter Wilson Says:

    Bart

    It seems we agree on what constitutes science, but differ radically in our perceptions of what is actually going on in the real world (both natural and human). Fair enough. If recent events haven’t changed your view, I guess nothing will.

    When we are old men, we will know which view is correct. I’m OK if I’m wrong, I hope you are too.

    Thanks for taking the time to debate with us sceptics though. In that respect at least you show yourself to be in a different league from most alarmist sites.

    [Reply: Hey, I’d love to be wrong! BV]

  40. VS Says:

    Hmmm, I just saw that I posted this in the wrong thread, so now in the right one:

    ———————

    Hi guys,

    For the record.

    I formally tested (see the ‘PPS’) the H0 that the GISS temperature series follows a ‘random walk’. There is also a link there to Alex’s post, where he performed a similar test.

    We both firmly rejected the H0 of a random walk.
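For readers who want to reproduce the flavour of that test, here is a minimal augmented Dickey-Fuller sketch using statsmodels. The `giss_annual` array below is a synthetic stand-in, not the real GISS series; substitute the actual annual means to repeat the exercise.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
years = np.arange(1880, 2010)

# Synthetic stand-in: a small trend plus noise (replace with real GISS data).
giss_annual = 0.006 * (years - 1880) + rng.normal(scale=0.1, size=years.size)

# H0: the series has a unit root (random walk-like);
# H1: it is stationary around a deterministic trend.
stat, pvalue = adfuller(giss_annual, regression="ct")[:2]
print("ADF statistic %+.2f, p-value %.3f" % (stat, pvalue))
# A small p-value rejects the random-walk H0, as VS and Alex report
# for the real series.
```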

  41. Luke Skywarmer Says:

    One thing about the “heat balance”: if we know this so exactingly, then why can you not explain the ice ages, going in for 90,000 years and then back out for 10,000? Easy, right?

  42. phinniethewoo Says:

    I might have misunderstood this, but I think Roger Penrose explained once that plant life converts high-frequency solar radiation into low-frequency radiation… it was an entropy thing, and the key for life.

    Could this be the reason why there is more IR radiation? We know there are more plants around (10% more than 20 years ago, I read somewhere).

    The reason for more plant life is explained by Lindzen: basically, plants are starved of CO2 nowadays; plants developed a long time ago in atmospheric conditions that had far more CO2. If you want your greens to grow faster, spray them with CO2.
    http://www.dw-world.de/dw/article/0,,2214482,00.html

    Plants are an excellent collector of statistical information: they grow on the basis of A1*CO2 + A2*H2O + A3*fertiliser + A4*Solar + A5*temperature. Now, I believe that with cunningly used OLS methods, maybe even TSA and co-integration, we can retrieve a good temperature out of this, and finally know how warm it is!! Because we can’t with thermometers so far, that’s for sure… Co-integration, people, don’t forget it now!

  43. phinniethewoo Says:

    I forgot to make a graph now?
    Ah well .
    Next time.

