The dietician responds


Bart claimed that his weight is insensitive to his food intake, because a stochastic model of his body weight cannot be rejected.

As a dietician, I beg to differ with this conclusion. The fact that one can construct a stochastic model that envelops the observed evolution of his bodyweight is by itself not very informative. Many finite datasets of a quantity that is known to be physically driven can probably be described mathematically by a stochastic model.

If a few years from now Bart’s bodyweight increases to outside the 95% confidence interval of this stochastic model, the current model would be falsified. No problem. It’s probably possible to construct another stochastic model, tested not on the period 1978 – 1992 (1880 – 1935), but e.g. on 1978 – 1994, or whatever period works, to obtain an even broader envelope of potential outcomes. (After all, if the model is based on a period where Bart’s personal energy balance already led to an even bigger increase in his bodyweight, it will probably widen even more in potential outcome.) So even if Bart wanted to try to falsify this hypothesis by e.g. eating as many brownies as he can over the next few years, chances are the hypothesis could easily be amended to still encompass his new weight.

Conversely, if the model was tested on 1978 – 1988 (1880 – 1920), what would it have looked like? Would the upswing in Bart’s bodyweight in recent years still be within the 95% confidence interval? Perhaps it would, but I wouldn’t bank on it.

In any case, the stochastic model predicts equal chances of Bart’s body weight increasing or decreasing, irrespective of Bart’s eating, sporting, and other relevant habits or state of health. That runs counter to the accumulated knowledge of the human body, not to mention the conservation of energy. How much did you weigh as a baby? According to the stochastic model, Bart’s body weight could as easily have turned out to be 60 kg rather than his current 100 kg, despite his eating habits. That’s preposterous.

I don’t claim to be able to predict Bart’s bodyweight to the gram, but I do claim to be able to make a more skillful prediction than this stochastic model. Namely, if I had access to data on his relevant habits (eating, drinking, sporting, sickness, state of metabolism, etc), I could explain within certain boundaries (much tighter than the stochastic model boundaries) how his bodyweight changed over the past decades, and why.

What’s worthwhile in this context is to have an explanatory model/framework for your bodyweight. If you change your eating habits to such and such, how is your bodyweight likely to respond? That’s an important question. An answer that your bodyweight is insensitive to what you eat, and that it could vary anywhere between -25 kg and +25 kg of your current weight, is uninformative in the extreme.

According to the stochastic model there are no explanatory, deterministic variables for your body weight; it just varies within very wide bounds. As such, it is an essentially meaningless prediction. Choosing to believe this model gives you the benefit of eating to your heart’s content, presumably without it influencing your body weight. Actually, your body weight may tend to go down as it approaches the less likely boundaries of the prediction interval. Even when eating all those mars bars! I don’t blame you for wanting to believe in it.
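To see how wide and how symmetric those bounds really are, here is a minimal sketch of a pure random-walk model for body weight (my own toy illustration with made-up numbers, not the actual model under discussion):

```python
# A toy random-walk model of body weight (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)
months = 15 * 12          # roughly a 1978-1992 "calibration" period
sigma = 0.5               # assumed size of a monthly random step, in kg

# Simulate many possible weight histories, all starting from 70 kg
paths = 70.0 + np.cumsum(rng.normal(0.0, sigma, size=(10_000, months)), axis=1)

low, high = np.percentile(paths[:, -1], [2.5, 97.5])
print(f"95% of simulated end weights lie in [{low:.1f}, {high:.1f}] kg")
# The band is symmetric around the starting weight and keeps widening with
# time: equal odds of gaining or losing, whatever you eat.
```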

However, I would urge you to take my thoughts into consideration when deciding about your eating habits. But in the end, it’s your choice; it’s your body after all. You choose how to deal with your own body.

At this crucial point the analogy breaks down.



41 Responses to “The dietician responds”

  1. mspelto Says:

    Glaciers like to eat snow and ice, and all the work of sitting outside trying to stay cool and move, like we do, causes them to shed snow and ice. Since the global glacier mass balance has been negative for the last 19 years, and increasingly so for the last three decades, does this suggest they lost their appetite or are just working harder to stay cool? In the North Cascades, for example, winter precipitation in the last decade has been 110% of normal, so we can conclude the mass balance loss, -10 m, has been due to working harder to stay cool. We could just wait to see how the glaciers do this year, but the key to a good diet program is forecasting. With this in mind, as is the case each spring, we forecast the mass balance of North Cascade glaciers, and this year determine it will be negative. This diet of working harder to stay cool is simply forecast not to be the right choice for a glacier that has been consistently losing too much mass. A new plan is needed.

  2. mikep Says:

    There seems to be a confusion here. Just because a time series shows a stochastic trend does not mean that it is not determined by some explanatory factor. What you are looking for are some variables that co-integrate with Bart’s weight. The danger of “standard” methods is that they pick up nonsense correlations. The stochastic models of one variable are merely descriptive, not explanatory. You just need to find out the order of integration of all the potential variables before you construct the model.

  3. crazy bill Says:

    mikep – the good thing about “reality-based” sciences (as opposed to sciences with a more tenuous grip on reality, such as economics or political science) is that you can generally avoid creating models made simply on the basis of correlations or co-integration of variates. Instead, you can develop a model based on totally independent “fundamental science” – physics or nutrition or some such – and then predict the outcome based on said model; the observed time-series then becomes a confirmation of the model predictions and since it was not involved in the creation of the model it is a true confirmation.

  4. Bart Says:

    mikep,

    VS has been less than straightforward about what his stochastic trend estimate means, e.g. saying
    “a deterministic trend is inconsistent with a unit root”
    and then
    “it can contain a drift parameter, which indeed predicts a ‘deterministic’ rise in a certain period”
    It’s hard to chase through the lingo and get what he means. It seems that most people understand it to mean ‘no deterministic (read: CO2) forcing is necessarily at work’ and then he and others still try to defend the unphysical nature of that by saying ‘no, that’s not what he meant’. ‘Drift’ sounds different from ‘deterministically forced’ though, and his stochastic model shows no tendency to either lower or higher values; i.e. it’s coincidental that the values went up. That’s unphysical, as it means that no deterministic forcing forced the values up or down, whereas in reality, there was.

    Plus, what Bill said. Physics based models trump nonsense correlations and nonsense stochastic/non deterministic ultra-wide-variance (aka anything goes) models.

  5. mikep Says:

    There is a well-established statistical literature on all this, and VS was perfectly clear to me. There are broadly three possibilities just looking at an individual time series that appears to be going up. First it might be best described as a deterministic trend: what that means is simply that the series should increase by the same amount each period plus an i.i.d. error process. Second it might be best described by a stochastic trend, i.e. some function of its previous values plus an i.i.d. error process. Thirdly it could be best described by including both a deterministic trend and a stochastic trend. The tests VS did showed that the stochastic trend was the best description. But whichever description is best has no bearing on whether the series is caused in some way or other. All it means is that if some combination of your explanatory variables cannot produce the relevant description then you have problems. I am told that Model E, for example, produces a stochastic trend for global temperature. Well, good. It has then passed a minimum test of accuracy. This also presumably refutes the idea that a stochastic trend is unphysical (unless Model E is unphysical).
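    To put the first two descriptions in concrete terms, here is a rough sketch of my own (arbitrary numbers, nothing to do with VS’s actual tests):

```python
# (1) deterministic trend: y_t = a + b*t + e_t, with i.i.d. errors
# (2) stochastic trend:    y_t = y_{t-1} + drift + e_t (a unit root process)
import numpy as np

rng = np.random.default_rng(1)
T = 130                         # about the length of the instrumental record
t = np.arange(T)

trend_stationary = 0.005 * t + rng.normal(0, 0.1, T)
unit_root = np.cumsum(0.005 + rng.normal(0, 0.1, T))

# Both drift upward over this sample, but shocks to the first die out while
# shocks to the second are permanent; the unit root tests are about deciding
# which description fits the observed series better.
print(trend_stationary[-1], unit_root[-1])
```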

  6. dhogaza Says:

    First it might be best described as a deterministic trend: what that means is simply that the series should increase by the same amount each period plus an i.i.d. error process.

    Increase by the same amount each period? You’re telling us that statistics only recognizes linear trends as being deterministic?

  7. Bart Says:

    Mikep,

    Aren’t you leaving one possibility out?

    Temp responds (with a time lag) to the net forcing and to the internal modes of variability such as ENSO, PDO, etc.

  8. mikep Says:

    Bart, this is a crucial point that does not seem to be getting across. Of course these things have to be looked at. But the approach VS was arguing for, one which is now absolutely standard in econometrics, is a two-stage process. First you examine all the variables you want to include in your model to see what their descriptions are, whether they have unit roots etc. You look only at their own time series behaviour without introducing any explanatory variables apart from a possible deterministic trend and the variables’ own past values. That is just stage one. Then and only then you look for a co-integrating relationship between the variables, using the appropriate techniques when you know the order of integration of the relevant variables. So first we have to look at the time series properties of all the relevant forcings etc. Which is precisely what Beenstock and his co-author do. And then you look for the co-integrating relationship, if any exists, between the variables. That’s the stage where the forcings contribute to the temperature, not before. Unfortunately we never got on to that in the other thread because there was so much resistance to the purely descriptive stage one. But the net forcings cannot be part of the description of the time series properties of global temperature, they are a potential explanation, with their own behaviour which needs describing.

    Part of the problem seems to be that people have in mind a simple model where the trend is essentially standing in for CO2, and both temperature and CO2 go up. This may be OK as a sort of simple short-hand model, but it is too over-simplified to be taken seriously. Apart from the fact that the simple trend is not a good (in the statistical sense) description either of temperature or of CO2 there is also the fact that temp depends on other factors than CO2 which therefore need to be included to get accurate answers about the effect of CO2. Omitted variables cause biased estimates of effects.

  9. mikep Says:

    And one other point. The whole co-integration literature grew from the examination of nonsense correlations or spurious regressions. The classic reference is Granger and Newbold 1974 (the same Granger who got the Nobel Prize for economics for his co-integration work). They showed that regressions of two random walks on each other exhibited high R2 and “significant” t statistics most of the time in a Monte Carlo study, even though by construction they were not related at all. The point is that if your variables are not stationary it’s quite easy to generate spurious regressions. Beenstock indeed argues that the relationship between the level of CO2 and temperature is spurious in precisely this sense. Co-integration is precisely a way of testing to see if your model does a good job of explaining the time series, contrary to what Crazy Bill suggests.
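    A small Monte Carlo in the spirit of Granger and Newbold (a sketch of my own, not their exact design) shows the problem:

```python
# Regress one random walk on another, independent one, and count how often
# the slope looks "significant" at the naive 5% level.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T, n_trials = 100, 1000
spurious = 0
for _ in range(n_trials):
    x = np.cumsum(rng.normal(size=T))   # random walk 1
    y = np.cumsum(rng.normal(size=T))   # random walk 2, unrelated by construction
    res = sm.OLS(y, sm.add_constant(x)).fit()
    if abs(res.tvalues[1]) > 1.96:
        spurious += 1

print(f"'significant' slope in {spurious / n_trials:.0%} of unrelated pairs")
# Typically well over half of the pairs look "related" -- the spurious
# regression problem that co-integration methods are designed to catch.
```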

  10. Bart Says:

    mikep,

    You wrote:

    First you examine all the variables you want to include in your model to see what their descriptions are, whether they have unit roots etc. You look only at their own time series behaviour without introducing any explanatory variables apart from a possible deterministic trend and the variables’ own past values.

    I’m still confused here. VS has stated that

    almost all test equations include a trend term.

    The choice of trend term influences the test result for unit root (at least that’s my understanding; correct me if I’m wrong). If there *is* a trend and you don’t account for it in the unit root test, you can erroneously conclude the presence of a unit root. It is thus important to account for the real underlying trend. Do I understand this correctly?
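    To illustrate the point (a toy sketch of my own, not a re-run of VS’s tests), using the augmented Dickey-Fuller test from statsmodels:

```python
# If the true process is stationary noise around a deterministic trend, an
# ADF test that omits the trend term tends not to reject the unit root,
# while one that includes it does (illustrative numbers).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
T = 130
y = 0.005 * np.arange(T) + rng.normal(0, 0.1, T)   # trend plus i.i.d. noise

p_no_trend   = adfuller(y, regression='c')[1]      # constant only
p_with_trend = adfuller(y, regression='ct')[1]     # constant plus linear trend

print(f"ADF p-value, no trend term:   {p_no_trend:.3f}")
print(f"ADF p-value, with trend term: {p_with_trend:.3f}")
# The verdict on the unit root can hinge on how the deterministic part is
# specified, which is exactly the worry about using the *right* trend.
```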

    VS has tested the GISS record using a linear trend, CO2 concentrations and CO2 forcings as the trend term (again, correct me if I’m wrong) and concluded the likely presence of a (near) unit root. He has not tested the GISS record using the lagged net forcing or similar as the trend term, whereas that is the physically expected underlying deterministic trend. Therefore, I’m not convinced that the presence of a (near) unit root is established beyond reasonable doubt.

    My resistance however is not against the unit root per se, but more against some conclusions drawn from it. Does the presence of a unit root exclude the possibility of a forced (deterministic) tendency of the data to behave in a certain way? (not necessarily going up linearly of course, but rather, responding to the lagged net forcing).

    Also, VS’s comparison between his stochastic (‘anything goes within very wide bounds’) and deterministic (linear increase) model specification was in my opinion entirely meaningless. The former basically implies an equal chance of temps going up or down (unphysical in light of a forcing acting on the system). The fact that the model can’t yet be falsified is meaningless because its bounds are so wide. And the latter is also unphysical, because why would the temp increase linearly when the forcings don’t? Its falsification is no surprise. No conclusions can be drawn from these model specifications, and I indeed resist when people try to make far-reaching conclusions nevertheless.

    I took Crazy Bill’s point to be that the alternative approach, a physics based model, is continuously left out, whereas people do try to make conclusions about the workings of the physical system. That seems odd.

  11. crazy bill Says:

    The problem in economics is that there are no real “mechanistic” models of the economic system. Sure there are some differential equations that supposedly model the dynamics of the system, but since the system is so ill-posed, ultimately you’re forced into data fitting and testing your model against essentially the same data. So it’s no wonder that the work on co-integration issues was of “Nobel” significance. Contrast that with a physics-based simulation, where the model is based on physical laws and properties that can generally be measured totally independently of the system being modelled. The model testing – comparison of the measured vs simulated output – is therefore completely independent of the model construction. Issues of “co-integration” leading to false correlation are totally irrelevant.

    Another point of course is the physical nature of temperature (or weight) compared to the immaterial nature of money. Rising temperature implies that there are measurable quantities of heat being accumulated in the system. The laws of physics say that is only possible if there is some (physical) mechanism to drive that heat and hold it within the system. We are thus compelled to seek a physics-based model to explain that measured increasing “drift”.

    By contrast, many economic measurements (eg prices, money supply) can in theory increase without limit because the numbers themselves are arbitrary creations of the economic system itself. There may be some theory about “bubbles” or “prices” that can be used to “explain” why the quantity is moving up and up (or down) but as we all know now, nobody knows (and some know less than others). You really do need to be careful about drawing false conclusions from correlating M with P.

  12. mikep Says:

    Bart,

    I think you are wrong about what VS did. All his tests were concerned only with the behaviour of a single series in terms of its own past values plus, as all standard unit root tests can do, a deterministic trend possibility, which was rejected. So I entirely agree with your first substantive para beginning “the choice of trend…”

    But I disagree with your second paragraph. He did not use a trend based on CO2. The reason is that we did not get as far as the co-integration analysis which is where we look for a relationship with the forcings. You are conflating the two stages of the process I described above. For a good introduction to these issues see the on-line resources for Christopher Dougherty’s introductory textbook. The link to the material for Chapter 13, which deals with non-stationary time series, is here

    http://www.oup.com/uk/orc/bin/9780199280964/01student/ppts/ch13/

    Your third paragraph I also disagree with. The various tests show that the behaviour of temperature has a unit root. On its own, as I keep repeating, this is just a description of the data generating process. Unit roots in a time series do not mean that the variable cannot be explained except by chance. Consider the example of the drunk and her dog, referred to on the other thread.

    http://www-stat.wharton.upenn.edu/~steele/Courses/434/434Context/Co-integration/Murray93DrunkAndDog.pdf

    Both left to themselves follow a random walk, the simplest kind of unit root behaviour. But the dog can still be found near its owner because it responds to her whistle. So this is the case where the variables are co-integrated even though they both follow a random walk. Co-integration tells us how to avoid the problems in correlating non-stationary time series, and distinguish those random walks (and other unit root processes) that are really unrelated despite the R2 and t stats, from those that are related.
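    A toy version of the drunk and her dog (my own sketch, not taken from the Murray paper):

```python
# The drunk follows a random walk; the dog wanders too, but is pulled back
# towards her, so the distance between them stays stationary.
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(7)
T = 500
drunk = np.cumsum(rng.normal(size=T))
dog = np.zeros(T)
for t in range(1, T):
    # the dog's own random step, plus a correction towards the drunk
    dog[t] = dog[t - 1] + 0.3 * (drunk[t - 1] - dog[t - 1]) + rng.normal()

print("ADF p, drunk alone:", round(adfuller(drunk)[1], 3))   # expect high: unit root
print("ADF p, dog alone:  ", round(adfuller(dog)[1], 3))     # expect high: unit root
print("co-integration p:  ", round(coint(drunk, dog)[1], 4)) # expect low: related
```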

    Your penultimate paragraph does not seem to me to be to the point. If the linear trend is just a proxy for CO2, then, in the co-integration analysis CO2 is what should be there. In fact the whole point of Beenstock’s analysis is that CO2 forcing is integrated of order 2, i.e. needs to be differenced twice to become stationary. Taking proper account of this is, in his opinion, why his results differ from those of Kaufmann and his various co-authors. But VS never got to the co-integration analysis.

    VS’s tests are not about causation – we never got that far – all they are about are the time series behaviour of the individual series. After all, remember that Model E is supposed to produce unit root behaviour in the temperature series. This surely means that unit root behaviour can be the results of a physical model.

    I think I need a separate post on physical models, this one is too long already.

  13. mikep Says:

    I promised another post on models. There are broadly two approaches to modelling. Caricaturing a bit, the first says we know all about the relevant pieces of science/economics before we start. So what we do is assemble them together and produce results. This works brilliantly, e.g. for predicting paths of space probes. Caricaturing again, the second says, we know what variables might be in the model, but we are not sure which subset to include or the exact form of the relationship. Therefore we estimate the model statistically and judge according to the correspondence of the estimated model to the observed outcomes.

    Contrary to what some people seem to think both approaches are followed in economics. The most prominent examples (ironically perhaps associated with right wing free market approaches) are the so-called Real Business Cycle models. These are not estimated, but the parameters are chosen on the basis of prior evidence about relationships, rather than estimated as part of the modelling process. Note also that estimated models are not pure statistical models: the basic types of relationship (e.g. downward sloping demand curves) are derived from economic theory.

    The first method is ideal if we do indeed already know most of what needs to be known. This does not seem to me to be true for climate. There are plenty of unknowns in terms of feedbacks e.g. from clouds etc. Indeed the fact that there are several different climate models extant shows that there is no unique way of building one from current knowledge.

    The estimated approach is not unphysical. It just recognises that there is plenty of wiggle-room between what we already know pretty much for sure and the exact relationships in the real world. Let us say that the estimated model approach gives some unexpected result. Then surely what we do is investigate and see if we can suggest reasons for this unexpected behaviour and try to work out what is going on. It might turn out the data are poor, it might turn out that some feedbacks were not working in the way we expect. But just because the model is estimated does not mean it is unphysical.

  14. Bart Says:

    mikep,

    the behaviour of a single series in terms of its own past values plus, as all standard unit root tests can do, a deterministic trend possibility, which was rejected.

    I think he used a linear trend, CO2 concentration, and CO2 forcing as deterministic trend estimates. Do you think he didn’t?
    I think what should be used instead is the (lagged) net forcing. Do you think it shouldn’t?

    A linear trend is not a good proxy for CO2, and CO2 is not even a good proxy for the net forcing (except perhaps over smaller time intervals, such as the past 30 years, where changes in other forcings are minor in comparison).

    A lot has been made out of VS’s stochastic vs deterministic model, and I think it’s mostly uncalled for. It purports to show that temps merely went up by chance. That’s very much the crux of the discussion.

  15. dhogaza Says:

    Part of the problem seems to be that people have in mind a simple model where the trend is essentially standing in for CO2, and both temperature and CO2 go up. This may be OK as a sort of simple short-hand model, but it is too over-simplified to be taken seriously.

    No, this is a strawman, no one seriously takes this view. Unless, of course, it was VS, because as Bart points out:

    I think he used a linear trend, CO2 concentration, and CO2 forcing as deterministic trend estimates. Do you think he didn’t?

    And from the beginning, people were arguing as Bart continues above:

    I think what should be used instead is the (lagged) net forcing. Do you think it shouldn’t?

    CO2 forcing can’t be taken in isolation. Suggesting that those of us who pointed that out some thousand-plus comments ago are guilty of “having in mind a simple model where the trend is essentially standing in for CO2″ is, shall we say in all kindness, a thoroughly-complete misread of the arguments that were made.

    No one is surprised that there’s no statistically significant correlation between the rise in CO2 and temperature prior to the last few decades. Climate scientists were pointing that out 35 to 40 years ago. They were arguing about *when* the signal would emerge, *when* net forcing would be dominated by increased forcing from CO2 due to increased concentrations of CO2 in the atmosphere.

    The problem isn’t with VS’s statistical observations, the problem is with his unwarranted unphysical conclusions.

  16. dhogaza Says:

    They were arguing about *when* the signal would emerge, *when* net forcing would be dominated by increased forcing from CO2 due to increased concentrations of CO2 in the atmosphere.

    It’s also worth pointing out that we’ve had some fairly ideal conditions the last few decades in the sense that TSI flattened (indeed, has dropped some the last decade or so), SO2 and other industrial aerosols have been to a large extent scrubbed from the air in the developed world (the major source), there’s only been one large volcanic event (and models did a good job of predicting the impact of that event), etc etc.

    There are sound physical reasons for the hypothesis that qualitatively things have changed in recent decades, and that CO2 has been dominating the net change in forcing over this period of time.

    Since Mikep claims to understand the statistics well (though his conflating of deterministic and linear trends still seems odd to me), perhaps he’s up to the challenge of showing us where B&R went wrong in their statistical analysis that led them to the physically impossible claim that 1 W/m^2 of forcing from the sun leads to 3x the warming of 1 W/m^2 of forcing from CO2? Or why CO2 “loses its punch” shortly after being emitted into the atmosphere (apparently it ceases absorbing LW radiation effectively or some such, if statistics is really capable of overturning physics)?

    I haven’t seen VS or his supporters tackle this basic problem with B&R.

  17. mikep Says:

    I can find no evidence that VS included CO2 in his unit root tests. Point me to the example if there is one. It would be an incorrect way of doing the FIRST STAGE of the analysis of non-stationary series. It’s at the second stage you look for co-integrating relationships and that is where CO2 should be introduced.

    Can we all agree that, in terms of standard unit root tests, looking only at the behaviour of individual time series in relation to their own past values and a possible trend, the preponderance of the evidence is that global temperature is best described by a unit root process? We may then be able to move on to consider what might explain the behaviour of global temperature, which might include the level of CO2.

    As for the linear trend model being a straw man, what else is the OLS estimate of trend?

  18. Bart Says:

    mikep,

    …looking only at the behaviour of individual times series in relation to their own past values and a possible trend…

    Hence my question again: What possible trend? If in the unit root test you can (or indeed, should) account for the presence of an underlying trend (a point with which you agreed a few comments back?), then why can this only be a linear trend? Why not one that is more realistic?

  19. crazy bill Says:

    Can we all agree that, in terms of standard unit root tests, looking only at the behaviour of individual time series in relation to their own past values and a possible trend, the preponderance of the evidence is that global temperature is best described by a unit root process?

    Actually I’d have to disagree with this. Straightforward tests of the temperature time series including a trend reject the hypothesis of a unit root.

    Indeed, if you’re worried about a unit root, the implication is that the process generating the time series has feedback loops that are close to being unstable. It’s clear enough how this kind of integrating behaviour can occur in econometric time series, but not so clear how it can explain the recent trends in the temperature series given that temperature (heat) doesn’t just accumulate by itself.
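    Loosely speaking (a toy illustration, not a claim about the actual climate system), a unit root means the feedback coefficient sits exactly at that knife edge:

```python
# AR(1) with feedback coefficient phi: phi < 1 means shocks die out,
# phi = 1 is a unit root (shocks persist forever), phi > 1 is explosive.
import numpy as np

rng = np.random.default_rng(5)
T = 200
shocks = rng.normal(size=T)

def ar1(phi):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + shocks[t]
    return y

damped, unit, explosive = ar1(0.8), ar1(1.0), ar1(1.02)
print(damped[-1], unit[-1], explosive[-1])
```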

  20. dhogaza Says:

    mikep says:

    It’s at the second stage you look for co-integrating relationships and that is where CO2 should be introduced.

    But one doesn’t expect a co-integrating relationship between temps and CO2 concentration alone when:

    1. CO2 concentration is low and the change over the period is minimal in terms of forcing.

    2. Other forcings – notably solar TSI – during the time frame are known to have grown significantly (in terms of the expected physics-predicted effect on temps).

    3. etc (exercise left to the economist who deigns to ignore physics)

    The expected lack of such a relationship was known to climate science, as I said, 40-50 years ago, based purely on physics.

    I didn’t see VS, for instance, introduce variance in solar TSI into his analysis, aerosols, etc. I admit I haven’t followed the entire thread, so perhaps his approach became more physically realistic, I missed it, and indeed he *has* overturned physics.

    But I rather doubt it. If I’m wrong, show me where he’s taken into account all of the physical variables in his statistical analysis.

  21. dhogaza Says:

    Can we all agree that, in terms of standard unit root tests, looking only at the behaviour of individual time series in relation to their own past values and a possible trend, the preponderance of the evidence is that global temperature is best described by a unit root process?

    Over what period of time? And why do you choose that time frame? And why do you think it matters?

    Interestingly, David Stern, who recently published an interesting paper in Nature, has had some things to say about the issue. VS claims to have hand-waved him into the dustbin of incompetence (sans credentials, but with a lot of chutzpah leading me to believe he’s a grad student in economics rather than a published professional – his unwillingness to tell us his professional chops supports my hypothesis) just as he did the working professional statistician Grant Foster (who VS claims doesn’t even understand first-year stats, a real hoot of a false claim).

    David Stern in one post said this:

    Certainly the time series properties of all these variables need to be taken into account when testing for trends, modeling etc. I’ve used methods that seem to me to be plausible approaches. There may be other plausible approaches and arguments. Simple linear regression methods that don’t take into account the potential problems are certainly hazardous. But I wouldn’t be too dogmatic about what the best approach is.

    And here we are. Where has VS taken into account the properties of known changes in TSI during his cherry-picked instrumental temperature record? SO2 and other cooling aerosols? Volcanic activity?

    (physics-based model experiments are made twisting all those knobs, including to extremes such as are seen in paleoclimate – the experiments aren’t limited to the very short period of instrumental temperature records)

    He justifies using that period of time because he says “we must use all of the data”, by which he means 1880-present.

    Odd … we could easily say “the satellite temp reconstructions are far more accurate” – (once upon a time a common denialist claim, until even UAH began to show warming that couldn’t be ignored, but that’s another story).

    So … why doesn’t VS limit his analysis to the “more accurate” 1979-present satellite record?

    Is he afraid of what such a statistical analysis might show?

    And … if it doesn’t show a unit root … or stochastic behavior … or shows a strong correlation with CO2 increases … what then?

    Which is more accurate? The longer-term cherry-picked surface station record which denialists apparently want to claim is both 1) too inaccurate to be useful and 2) shows temp follows a random walk, or the shorter satellite record?

    Or long, long term paleoclimate reconstructions with all their uncertainties?

    How is VS’s choice of dataset not an a priori bias? How do we know that he didn’t sift through a series of datasets covering various timeframes and cherry-pick the one that fits his argument?

    And what is it with your earlier claim that only a linear trend is considered deterministic? (do you need me to do the algebra for you?)

  22. dhogaza Says:

    Perhaps we can all agree that we’re thankful that Max Anacker and the rest over at that other thread haven’t found this one yet?

  23. DLM Says:

    mikep said: “I can find no evidence that VS included CO2 in his unit root tests. Point me to the example if there is one. It would be an incorrect way of doing the FIRST STAGE of the analysis of non-stationary series. It’s at the second stage you look for co-integrating relationships and that is where CO2 should be introduced.”

    You are correct. VS did not include CO2 unit root tests in what was his clearly stated FIRST STAGE analysis, which only included the character of the temp series. I know about as much about statistics as do sod and doghaza, but I do get where VS was going. The confused here are stuck on the concept that the relatively short temperature time series under analysis appears to be essentially random, statistically speaking. As crazy bill says, “temperature (heat) doesn’t just accumulate by itself.” Yes, everybody knows that something is causing temperature to vary, and the popular assumption here is that it just has to be CO2. Therefore, the idea that you can’t legitimately use OLS to find a trend that coincides with increasing CO2 is not useful and does not make them happy. They fear the next stage will produce similar disappointing results.

    This reminds me of the differences between the likes of the finance and investment geniuses like Eugene Fama, Burton Malkiel and the so-called TA (technical analysis) crowd. The former are of Random Walk and Efficient Markets fame, while the latter have convinced themselves that stock price movements are not essentially random (hey, something is moving prices) and that they can find trends in past stock price movements that can be exploited for profit. And if you are determined to find trends, you can find them. Data snooping sometimes helps.

    Warren Buffett said that he realized TA was BS when he turned the stock price charts upside-down and got the same answer. Buffett’s mentor, Ben Graham, author of the Bible of investing, “Security Analysis”, famously described the TA stuff as being ‘as fallacious as it is popular’.

    Of course, these guys here would claim that they are the geniuses and VS is the TA practitioner. They ain’t gonna see it your way. They have to hold the line.

    PS
    The financial geniuses don’t know everything either. Markets aren’t all that efficient. Otherwise a dummy like me couldn’t make so much money. For example: I recently took on very large positions in two little penny-stock, fly-by-night biotechs that the market may have overlooked. HDVY.OB POSC.OB Now don’t buy them. I am a dummy.

    Now I am going to go over to the other thread and tell VS’s crew where you all are hiding.

  24. dhogaza Says:

    DLM dumps a load …

    The confused here are stuck on the concept that the relatively short temperature time series under analysis appears to be essentially random, statistically speaking.

    While, of course, when analyzed over other periods of time, such as the period of time during which physics informs us that CO2 dominates the net change in forcing, it does not appear to be essentially random.

    VS’s response to that was “why do you not use all the data?” as though the GISS 1880-present period represents all of climate history.

    As I point out above, why that record? Why not the satellite temp record, or the much longer paleoclimate reconstructions?

    VS accuses others of cherry-picking while he himself has, of course, cherry-picked.

    As crazy bill says, “temperature (heat) doesn’t just accumulate by itself.” Yes, everybody knows that something is causing temperature to vary, and the popular assumption here is that it just has to be CO2.

    Wrong. Climate scientists and those who understand the science know it’s a combination of a variety of positive and negative forcings – GHGs, TSI, sulfate aerosols, stratospheric injections of aerosols by volcanoes, Milankovitch cycles, etc – overlaying a great deal of variability (noise) in the system.

    These can be accounted for. When accounted for, you get physics-based models that work much like the observed climate system.

    It is VS who ignores the existence of other forcing factors and the need to account for them, at minimum by using lagged net forcing, as Bart suggests.

    Therefore, the idea that you can’t legitimately use OLS to find a trend that coincides with increasing CO2 is not useful and does not make them happy.

    No one suggests – and this includes professional statisticians like Tamino – that you can use OLS over the entire climatic record of Planet Earth. However, at times trends and processes do closely match linear models, and much can be learned from applying them.

    In other words, another strawman argument from another denialist, big surprise.

    They fear the next stage will produce similar disappointing results.

    There’s no “disappointment” due to naive statistical analyses leading to unphysical results that simply can’t be correct (such as the conclusions of B&R). Trust me, science is unaffected. Physics textbooks will not be rewritten.

    My sarcastic and cynical side says the reputation of econometrics isn’t affected, either …

    Statistics is a tool, nothing more. VS and some others seem to be forgetting this.

  25. Ohio Says:

    I think one of the keys to understanding what VS did is that this data is time-series data. VS simply said the way in which you apply statistics to time-series data is different from data generated by random samples or experiments. I don’t think there is anything controversial about that. His tests indicated the presence of a unit root which then requires co-integration to properly evaluate the causes of the temperature variation. He never did his own co-integration analysis, although it sounds like other studies may have been done already? The controversial part of his work was his pronouncement that the temperature variation observed could totally be explained as random variation. Or in other words, that whatever deterministic factors (like CO2) are at work, their effect was not enough to cause the temperature signal to leave the bounds predicted by random variation. While the temperature signal did not exceed the 95% limit, it did exceed a 90% limit. This leads me to believe that a CO2 effect would probably appear in co-integration.

  26. dhogaza Says:

    I think one of the keys to understanding what VS did is that this data is time-series data.

    Everyone realizes that. In particular, Tamino/Grant Foster realizes that – time series analysis is his *profession*, his day job.

    And, of course, VS isn’t the only statistician to look at the data. And Tamino isn’t the only statistician to look at it and to reach a conclusion different than VS’s (David Stern, who authored a paper in Nature and posted in the extremely long thread, for instance).

    No, that’s not the problem.

    Besides:

    VS simply said the way in which you apply statistics to time-series data is different from data generated by random samples or experiments

    1. It’s easy to generate random data that has whatever auto-correlation structure the real time series you’re comparing with has (see the sketch below).

    2. Most experiments take place over time, meaning they yield … you can fill in the blanks.
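    A quick sketch of point 1 (mine, using a simple AR(1) as the 'structure'):

```python
# Fit the lag-1 autocorrelation of a series, then generate surrogate noise
# with (roughly) the same autocorrelation, here with a simple AR(1).
import numpy as np

def ar1_surrogate(series, rng):
    x = series - series.mean()
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
    innov_sd = x.std() * np.sqrt(1.0 - phi ** 2)  # matching innovation scale
    out = np.zeros(len(series))
    for t in range(1, len(series)):
        out[t] = phi * out[t - 1] + rng.normal(0.0, innov_sd)
    return out + series.mean()

rng = np.random.default_rng(0)
observed = np.cumsum(rng.normal(size=200)) * 0.1  # stand-in for a real series
surrogate = ar1_surrogate(observed, rng)
print(np.corrcoef(observed[:-1], observed[1:])[0, 1],
      np.corrcoef(surrogate[:-1], surrogate[1:])[0, 1])
```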

  27. DLM Says:

    doghaza,

    You need to calm down. You are not making sense, and Bart doesn’t like the name-calling.

    You simply do not understand what VS has done, and was intending to do, until he became frustrated by the continuing displays of willful ignorance and open hostility, such as yours.

    Ohio, in his mid-western common sense explanation, has hit the nail on the head. That is the whole story of VS’s discussion. VS didn’t attempt to prove anything other than the temp series has a unit root. It is his opinion, based on his interpretation of a Nobel Prize winning statistical method, that further analysis requires the use of that co-integration stuff. Why couldn’t you just admit that he might have a point, and allow the discussion to go forward? What are you afraid of?

    As Ohio very correctly points out, the CO2 effect could very well show up in the co-integration analysis. Contrary to your silly strawman accusation that all of us denialists don’t believe that increased CO2 causes some warming, I would be surprised if some CO2-temperature correlation doesn’t show up. Are you afraid it won’t be as strong as you would like it to be?

    You said; “While, of course, when analyzed over other periods of time, such as the period of time during which physics informs us that CO2 dominates the net change in forcing, it does not appear to be essentially random…. However, at times trends and processes do closely match linear models, and much can be learned from applying them.”

    So why don’t you cherry pick the best of those time periods, and tell us where the missing heat is? Kevin ‘Travesty’ Trenberth says half the heat has gone missing. Can you help him? Here is an interesting overview:

    http://theresilientearth.com/?q=content/missing-heat-hides-climate-scientists

    You are correct about one thing, whatever is discussed on some obscure blog is not likely to overturn any established science. The problem you have is that your catastrophic AGW science is not well established. If it were, half the *&^%$#g heat wouldn’t be missing.

    Ohio,
    I am from Cincinnati. Moved to California to escape the nasty winters. But I am planning to move back to Cincy, as it is getting very balmy there now. Right? And I am really worried about the rising sea level here.

  28. Ohio Says:

    dhogaza,
    Most experiments in climatology take place over time. Most experiments in physics and engineering take place in a controlled experiment where time is not an independent variable. And when I said random samples, I was referring to the engineering / manufacturing concept of taking random samples of a population to generate statistics about the population. So in other words, the process isn’t random, but the sampling pattern is. The statistical techniques applied to controlled experiments and sampling are very different from the techniques applied to time-series or naturally-occurring data. As you said, everyone understands this.

    Please also keep in mind that VS’s original comment was in response to a post of Bart’s applying OLS trending and confidence intervals to time-series temperature data. VS demonstrated that those techniques are not appropriate to use on time-series data with a unit root.

    I don’t think VS proved anything more than that.

    DLM,
    I’m from Cleveland so we have our own microclimate with the lake effect. This past winter we had snow on the ground for pretty much two months straight, so I am ready for spring.

  29. Hank Roberts Says:

    You link to a blog that spins and adds claims that aren’t in the Trenberth document to make it sound like it’s discussing a failure in the models or theory or observations to account for where the heat is going.

    ‘Resilientearth’ adds stuff like this, that’s NOT in the Trenberth doc:
    “Earth’s surface temperatures have largely leveled off in recent years.”

    Don’t be fooled. Read the original for yourself:

    http://www.sciencemag.org/cgi/content/summary/328/5976/316

    “Existing observing systems can measure all the required quantities, but it nevertheless remains a challenge to obtain closure of the energy budget. This inability to properly track energy—due to either inadequate measurement accuracy or inadequate data processing—has implications for understanding and predicting future climate.”

    This is straightforward, we know we need more data from the oceans; the structure of the ocean circulation is only beginning to show up as more and more instruments are added. Not a problem, an opportunity — unless there’s more delay getting the research done.

    Who’s in favor of delay?

  30. Hank Roberts Says:

    PS, examples:

    This is a 1-degree grid compared to a 1/5-degree grid resolution.
    Imagine getting a thermometer into each grid cell in each condition, eh?

    Well, it can be done, and it’s happening, finally:

    http://www.hydro-international.com/issues/articles/id958-Slocum_Gliders_Make_Historic_Voyages.html

    “… Gliders, as currently configured, were first detailed in Webb’s lab notebook on 2 February 1986 as a novel instrument approach. It has taken some time to bring the concepts he noted then to reality, yet gliders are steadily making their place in the world as high-endurance sensor platforms. More importantly, this class of long-range and relatively low-cost autonomous underwater vehicle (AUV) is creating the potential for an affordable, adaptive sampling network that has the potential to substantially increase our knowledge of the world’s oceans….”

    http://spray.ucsd.edu/pub/rel/info/spray_description.php

    Detailed resolution:

    Look up the Slocum Glider science fiction short story from 20 years ago (you know how to find this stuff) for a real treat. Current technology isn’t that good yet, but they’re getting closer.

    See, this is what you miss reading denial sites instead of science sites.

    Or look at the current science:

    http://www.gfdl.noaa.gov/bibliography/related_files/bfk0801.pdf

    The models are just beginning to be able to crunch the volume of numbers we have been getting from existing instruments, and the instrument people are getting more and smarter instruments into the ocean steadily. It’s a race to be smart enough to manage the planet while it’s changing very fast.

    “Delay is the deadliest form of denial.”

  31. DLM Says:

    Hank,

    [edit] I described the link I provided as an interesting overview, period. I didn’t represent it as Kevin ‘It’s a travesty’ Trenberth’s viewpoint, and neither did the authors. They provided a link to Trenberth’s goofy paper, as well as to NCAR. I looked at both. Why do you people get so @#$%^&g indignant, when someone expresses an opinion?

    Let me help you. The parts that were not quoted as having been said by Trenberth, or somebody else, are the opinions/interpretations of the authors.

    Now this revealing bit of settled climate science is priceless: “Existing observing systems can measure all the required quantities, but it nevertheless remains a challenge to obtain closure of the energy budget. This inability to properly track energy—due to either inadequate measurement accuracy or inadequate data processing—has implications for understanding and predicting future climate.”

    It also has obvious implications for understanding the competency of the climate scientists.

    They claim to be able to measure all the inputs and outputs of the ‘energy balance’ down to about a tenth of a degree, but somehow half of the heat that is supposed to be here, is MISSING!

    And that foolishness is the best climate scientists can do, after decades of research that has cost taxpayers 70 billion dollars.

    What if your bank told you that they had all of your deposits and withdrawals recorded correctly, but half of your money was MISSING?

    No, I am not going to look up a science fiction story from twenty years ago. The contemporary tale “The Travesty of the Hidden Heat”, By K. Trenberth et al, will keep me entertained for quite a while.

    You will have to ask the feckless carbon spewing apparatchiks, who breezed through Copenhagen a couple of months ago, about that delay thing. My guess is, they are just not that worried.

  32. Al Tekhasski Says:

    Crazy bill said: “Rising temperature implies that there are measurable quantities of heat being accumulated in the system. The laws of physics say that is only possible if there is some (physical) mechanism to drive that heat and hold it within the system.”

    Sorry, but I have to repeat again, for the fifth time on this blog: the Earth system is a system far from thermodynamic equilibrium; it has substantial spatial inhomogeneity. Therefore it cannot have a temperature in its normal physical meaning. In sharp contrast with the physics of thermodynamically-equilibrated systems, a rising GLOBAL AVERAGE temperature implies no such thing as “heat is being accumulated”. Given that the steady state of Earth’s energy is defined by radiation, the T^4 dependence allows for a wide range of different global temperature averages to have identical radiative balance. Likewise, global temperatures can go down while heat is accumulating, or the global average could go up while the Earth is losing heat.

    An example of four different temperature distributions was given here

    http://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1962

    These distributions are energetically identical, yet their global average differs by 20K.

    I totally agree with what DLM said about competency of certain “scientific” circles. Please stop repeating scientifically wrong statements.

  33. Heiko Gerhauser Says:

    Hi Bart,

    I think what can be said is that it is difficult to check a model of how calorific intake impacts your weight, when all we’ve got is that data series, first principles and poor knowledge of many underlying factors, such as say how much of the weight increase has been masked by you taking up sports.

    On the random vs deterministic thing: Take throwing coins. Something decides which way the coin ends up. If we knew those factors, we could use physical models to do the calcs. If we don’t, but we have data for a 1000 coin throws and want to make a prediction for the next thousand, we won’t make recourse to physical laws, we’ll make recourse to statistics.

    In the case of climate, if say cloud albedo or algae populations and therefore ocean albedo changed, well that would be a forcing, but if we cannot readily predict these changes, yet capture them well with probabilities, like for the coins we might be better off with a statistical model than a pure physical one.

    Now for climate not to be impacted by CO2 or aerosol or solar forcing changes, and only to be impacted by internal variability of cloud cover or biological systems and the like, so that the climate outcome probability distribution is exactly the same no matter whether we make the sun 10% brighter or inject 1000 ppm worth of CO2, or inject lots of sulphur into the atmosphere, requires the world to be a perfect thermostat with respect to these forcings.

    As I understand them, the climate models do give different outcomes for slightly different initial conditions. So, there is indeed a random element. The difficult bit is in using the data we’ve got to determine whether the models get the internal variability right. To me it’s clear that the data series is too short and the forcings too poorly known to pin down how accurately the models reflect internal variability of the system.

    I also think that statistics would help with that, if we had really good temperature data for a very long time period with constant or near constant forcings (say the last 10000 years with a resolution of 1 year or better and the error margin being of the order of 0.1C rather than being of the order of 1C).

  34. Heiko Gerhauser Says:

    Hi Bart,

    let me add one more thought. You focus very much on random variability as an alternative to CO2 forcing. But, we can also look at a stable climate period without forcings and then look for a good model that reflects internal variability.

    I’ve now extended the spreadsheet I did to 10000 years, adding rand()/10+0.05 every year for 10000 years.

    It’s far from the case that the temperature then automatically runs away to unrealistic values. I’ve done a few permutations, one had the temperature anomaly after 10000 years at +0.49C. with the maximum achieved over the 10000 years +1.2C and the minimum -1.95C.

    This is within the range of what proxies tell us about past climate, so this simple statistical model might actually be quite good at representing internal, unforced variability.

  35. Heiko Gerhauser Says:

    oops, it’s -0.05 in my spreadsheet of course, not +0.05.
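    In code terms the recipe is roughly this (a sketch, assuming rand() means a uniform draw on [0,1)):

```python
import numpy as np

rng = np.random.default_rng(0)
steps = rng.random(10_000) / 10 - 0.05   # rand()/10 - 0.05, one step per year
anomaly = np.cumsum(steps)

print(f"final {anomaly[-1]:+.2f} C, max {anomaly.max():+.2f} C, "
      f"min {anomaly.min():+.2f} C")
# Different seeds give different excursions; the point is to compare their
# size with what the proxies suggest about unforced variability.
```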

  36. dhogaza Says:

    DLM doesn’t give up …

    Now this revealing bit of settled climate science is priceless: “Existing observing systems can measure all the required quantities, but it nevertheless remains a challenge to obtain closure of the energy budget. This inability to properly track energy—due to either inadequate measurement accuracy or inadequate data processing—has implications for understanding and predicting future climate.”

    Fair enough quote …

    They claim to be able to measure all the inputs and outputs of the ‘energy balance’ down to about a tenth of a degree, but somehow half of the heat that is supposed to be here, is MISSING!

    Well, no, the point is that there’s instrumentation out there but there have been a lot of problems getting reliable and accurate data from some of them. Frustration is being expressed because of this. I bolded/italicized the part of the quote that makes the point that you have clearly missed. The point that makes clear that it’s not claimed that “all inputs and outputs can be measured to about a 10th of a degree”, but rather the opposite.

    It also has obvious implications for understanding the competency of the climate scientists.

    Climate scientists don’t make the instruments. In many cases, the instruments weren’t designed for climate work in the first place. The historical surface temperature network was established to help weather forecasters, and for weather forecasting things like trees growing slowly and over decades eventually shading a thermometer don’t matter. Weather forecasters are interested in temps over a very short period of time. Likewise the MSU sensors on WEATHER satellites, they weren’t designed to yield consistent long-term data. The fact that sensors on each successive satellite give slightly different results doesn’t matter for their WEATHER mission, it only matters for climate scientists trying to use the data for something it wasn’t designed to yield.

    How stuff like this reflects badly on climate scientists baffles me.

    Perhaps you meant it in a positive sense. That’s how I feel. The fact that climate scientists have built robust temperature products using data not collected with climate research and analysis in mind is damned impressive, when you think about it.

    Now how about you go off and try not to misunderstand and not misrepresent things for a change?

  37. dhogaza Says:

    They provided a link to Trenberth’s goofy paper

    And there’s nothing goofy about the paper at all.

    We get indignant due to your constant misrepresentation of Trenberth’s work and the implications.

    He got indignant too, in public. Pissed him off. I don’t blame him.

  38. Anonymous Says:

    doghaza,

    Get serious.

    “Well, no, the point is that there’s instrumentation out there but there have been a lot of problems getting reliable and accurate data from some of them.”

    Look at the so-called ‘Global Energy Flow’ charts and tell me they aren’t claiming to measure such things as Outgoing Longwave 238.5, Incoming Solar 341.3, Net Absorbed 0.9, down to one tenth of a @#$%^&g Watt per square meter.

    And this is really hilarious, after $70 billion has been spent on this foolishness: “Climate scientists don’t make the instruments.”

    If the instruments are not accurate, why do the poor misunderstood and misrepresented climate scientists pretend to know, down to one tenth of one degree, what the ‘energy budget’ for the whole @#$%^&g world is? Don’t you think that is just a little bit arrogant, given that they can’t find the MISSING HEAT?

    But we have learned that, when discussing this settled science amongst themselves, the poor misunderstood and misrepresented eminent climate scientists ain’t so smug [edit] of themselves:

    http://junkscience.com/FOIA/mail/1255523796.txt

    Yes that’s right, junkscience.

    You are in deep denial, doghaza.

  39. DLM Says:

    What happened Bart? Too hot for you?

  40. DLM Says:

    doghaza,

    I replied to your foolishness with a post that apparently didn’t make it through Bart’s screening. Anyway, it was a rehash of a rehash, as is most of the other foolishness posted here. I am out. Have fun.

  41. DLM Says:

    Sorry Bart. It seemed out of character for you to bleep me completely. Somehow I became Anonymous for that post.
