The two epochs of Marcott

Guest post by Jos Hagelaars. Dutch version is here.

The big picture (or as some call it: the Wheelchair): Global average temperature since the last ice age (20,000 BC) up to the not-too-distant future (2100) under a middle-of-the-road emission scenario.

[Image: Shakun_Marcott_HadCRUT4_A1B_Eng]

Figure 1: The temperature reconstruction of Shakun et al. (green, shifted manually by 0.25 degrees) and of Marcott et al. (blue), combined with the instrumental-period data from HadCRUT4 (red) and the model average of the IPCC projections for the A1B scenario up to 2100 (orange).

Earlier this month an article was published in Science presenting a temperature reconstruction of the past 11,000 years. The lead author is Shaun Marcott of Oregon State University; the second author, Jeremy Shakun, may be familiar from the interesting study published last year on the relationship between CO2 and temperature during the last deglaciation. The Marcott reconstruction is the first to cover the entire Holocene. Naturally this reconstruction is not perfect, and some details will probably change in the future. That is a normal part of the scientific process.

The temperature reconstruction extends to the mid-20th century, so the rapid temperature rise since 1850 is clearly visible in the graphs presented in the study.
And what do we see? Again something that looks like a hockey stick, as in the graph from Mann et al. 2008.

[Image: Hockeystick-Marcott_Mann2008]

Figure 2: The temperature reconstruction of Marcott 2013 (past 11,000 years) and a collection of reconstructions (past 1800 years) as presented by Mann 2008.

Are the results from Marcott et al surprising?
Not really. The well-known graph of Holocene temperature variations on Global Warming Art, often encountered on the internet, paints a comparable picture. One could say that Marcott et al. managed to scientifically confirm the thick black average line of the Global Warming Art image. See figure 3.

[Image: Holocene_Temperature_Variations_Marcott]

Figure 3: Holocene temperature variations from Global Warming Art, with the average in black, combined with the reconstruction of Marcott 2013 in red.

Patterns in temperature reconstructions that resemble a hockey stick are fervently contested on climate-skeptic websites. The Marcott et al. reconstruction is no exception. For example, it is hard to keep track of the number of posts WUWT has dedicated to this study, and the statistical wonderboy McIntyre is also energetically producing blog posts. Otherwise the general public might get the impression that humans are strongly influencing the climate, and apparently that is not a desirable impression.

The study of Marcott suggests that the earth is warming rapidly from a historical perspective, though the authors warn that the low time resolution of about 120 years and the subsequent smoothing preclude a hard statement on whether the current warming is truly unprecedented. The study is about the Holocene, the geological period from 11,700 years ago until now. From the main image of Marcott 2013 it can be deduced that after the last ice age the earth's temperature rose until about 7,000 years ago, followed by a slow decline. The cause of the gradual cooling of recent millennia is a change in the distribution of solar radiation over the earth and over the seasons, known as the Milankovitch cycles, which are responsible for the initiation and termination of ice ages.

After the year 1850 the influence of man-made emissions is clearly visible in Marcott's figure: an unprecedented increase in temperature, in terms of speed, over more than 100 years. The average temperature of the last decade was higher than the temperatures have been for 72% of the past 11,000 years:

Our results indicate that global mean temperature for the decade 2000–2009 has not yet exceeded the warmest temperatures of the early Holocene (5000 to 10,000 yr B.P.). These temperatures are, however, warmer than 82% of the Holocene distribution as represented by the Standard5×5 stack, or 72% after making plausible corrections for inherent smoothing of the high frequencies in the stack.

Epochs have a beginning and an end. From the main image of Marcott's study you could deduce that, regarding climate, a new epoch began about 150 years ago: a clear break in the trend of the past 11,000 years. The end of the Holocene was reached in 1850 and the Anthropocene has started, the epoch in which humanity asserts its influence on the climate. This leads to disbelief in certain parts of the population, which was predicted at the start of the Anthropocene by Charles Dickens, who wrote in 1859:

It was the best of times, it was the worst of times,
it was the age of wisdom, it was the age of foolishness,
it was the epoch of belief, it was the epoch of incredulity

Figure 1 at the beginning of this blog post clearly shows that mankind is creating a new world, with a climate that human civilization has never encountered before. If human greenhouse gas emissions continue unabated, the temperature will go up even further. According to the IPCC 2007 A1B scenario we will probably have temperatures in the year 2100 that are about +3.0 degrees above the 1961-1990 average. The expected jump in atmospheric temperature from 1850 to 2100 is of the same order of magnitude as the rise in temperature from the last ice age to the Holocene, as derived from the Shakun 2012 data. The difference is that the current and future increase in temperature occurs orders of magnitude faster.
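
To get a feel for that difference in speed, here is a back-of-the-envelope rate comparison in Python, using round numbers read off the graphs (my own illustration, not a calculation from either paper):

# Round numbers read off Figure 1; illustrative only, not values
# computed by Shakun et al or Marcott et al.
deglacial_warming = 3.5     # degrees C, last ice age -> early Holocene
deglacial_span = 10000.0    # years, roughly 20,000 to 10,000 BP
modern_warming = 3.8        # degrees C, 1850 -> 2100: ~0.8 observed plus ~3.0 projected (A1B)
modern_span = 250.0         # years
rate_then = deglacial_warming / deglacial_span  # ~0.00035 degrees C per year
rate_now = modern_warming / modern_span         # ~0.015 degrees C per year
print(rate_now / rate_then)                     # a factor of ~40, well over an order of magnitude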

Marcott et al also refer to the climate model projections of IPCC 2007:

Climate models project that temperatures are likely to exceed the full distribution of Holocene warmth by 2100 for all versions of the temperature stack, regardless of the greenhouse gas emission scenario considered (excluding the year 2000 constant composition scenario, which has already been exceeded). By 2100, global average temperatures will probably be 5 to 12 standard deviations above the Holocene temperature mean for the A1B scenario based on our Standard5×5 plus high-frequency addition stack.

In other words: unprecedented, as much as 5 to 12 standard deviations above the mean temperature of the Holocene. Welcome to the Anthropocene!

A famous SF series from past times always began with:
“To boldly go where no man has gone before”
Indeed, we are boldly entering a new epoch where no man has gone before. I have some doubts whether our descendants will be so delighted about it.

[UPDATE 31 March 2013]
A summary and FAQ related to the study by Marcott et al (2013, Science), prepared by the authors, can be found at RealClimate.


117 Responses to “The two epochs of Marcott”

  1. Paul Kelly Says:

    Marcott is said to be preparing a FAQ to address McIntyre and other critics.

    One paragraph reads: “… though the authors warn that the low time resolution of about 120 years and subsequent smoothing preclude a hard statement on whether it is truly unprecedented”

    Another reads: “… an unprecedented increase in temperature in terms of speed over more than 100 years”. Which is it?

  2. Jos Hagelaars Says:

    @Paul Kelly. Thanks, change indicated by strike-through.

  3. Bob Brand Says:

    In my opinion it would be correct to say: “… a probably unprecedented increase in temperature”.

    The authors conclude the article with:

    Strategies to better resolve the full range of global temperature variability during the Holocene, particularly with regard to decadal to centennial time scales, will require better chronologic constraints through increased dating control. Higher resolution sampling and improvements in proxy calibration also play an important role, but our analysis (fig. S18) suggests that improvements in chronology are most important. Better constraints on regional patterns will require more data sets from terrestrial archives and both marine and terrestrial records representing the mid-latitudes of the Southern Hemisphere and central Pacific.

    For an increase of ~0.8 °C to hide within the Holocene without being detected, it would have to be mirrored by a similar decrease. If this decrease took place *immediately* after the increase, and took about as long, we would be looking at a ‘blip’ of +0.8 °C over a period of ~200 years.

    Yes, that *might* have gone undetected although it doesn’t seem likely with a time resolution of about 120 years. It can’t have happened too often, or it would have been picked up in several of the proxies.

    This is under the rather improbable assumption that the increase would be followed by a decrease of the same magnitude, immediately afterwards.

    It would be pretty hard to offer a physical explanation for such an event, which would *not* leave any trace in sediments or ice cores (an asteroid impact maybe, while large scale volcanism or a methane burp of this magnitude would certainly leave a signature).

    Therefore: “… probably unprecedented“, I’d say. :)
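
    A quick numpy sketch of that detectability argument (a plain boxcar average stands in for the proxies' ~120-year median resolution here; this is a toy, not the Monte Carlo procedure of Marcott et al.):

    import numpy as np

    # Toy test: a 200-year triangular 'blip' of +0.8 C, smoothed by a
    # 120-year boxcar standing in for the proxies' median resolution.
    blip = np.zeros(11000)
    rise = np.linspace(0.0, 0.8, 100)
    blip[5000:5100] = rise        # 100-year rise...
    blip[5100:5200] = rise[::-1]  # ...immediately mirrored by a 100-year fall
    smoothed = np.convolve(blip, np.ones(120) / 120.0, mode="same")
    print(smoothed.max())  # ~0.56 C survives: attenuated, but hard to miss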

  4. toto Says:

    But the point is precisely that the current warming is *not* a blip. Even if CO2 emissions fall to zero by 2100, temperatures will still remain high for at least a couple of centuries.

    Any such event would have been caught by Marcott’s method. So although their method cannot exclude rapid blips of past warming faster than the current one, it nevertheless shows that the current event of fast, stable warming is indeed unprecedented.

    Unfortunately this genuine discussion is overwhelmed by noise about the unreliable, irrelevant final spike in the reconstruction, despite the authors repeatedly pointing out that it is probably not robust.

  5. Bob Brand Says:

    Toto,

    I agree completely. Well, with the one caveat that IF people (not me) were to deny the consequences of extra GHGs and posit some ‘magical effect’ that:

    * would occur at precisely the same time as the rise of GHGs;

    * would exactly mimic the expected effect of extra GHGs;

    * would also somehow cancel the radiative forcing of extra GHGs;

    * while not being a GHG;

    it might stand to reason such an effect would have occurred before. And since it took time to increase the heat content of the troposphere and the oceans, it would also take a similar amount of time to shed this extra heat. That’s why such an event would last at least about 2 * 100 years, supposedly.

    The noise about the irrelevant final spike completely overlooks what the authors stated prominently on the first page of the article:

    Without filling data gaps, our Standard 5×5 reconstruction (Fig. 1A) exhibits 0.6 °C greater warming over the past ~60 yr B.P. (1890 to 1950 CE) than our equivalent infilled 5° × 5° area-weighted mean stack (Fig. 1, C and D). However, considering the temporal resolution of our data set and the small number of records that cover this interval (Fig. 1G), this difference is probably not robust.

    Even after Marcott pointed this out in his letter, it is ignored.

  6. Eli Rabett Says:

    The point is, and we have Jos to thank for that, that we DO have a lot of higher resolution data for the past 200/300 years and that data shows an unprecedented rise.

  7. johnmashey Says:

    Bob: there are 2 separate magical effects needed for current warming:
    gremlins to fake the GHG effects
    leprechauns to nullify physics to make GHGs no longer work

    but we certainly agree: regardless of the statistical possibilities, any explanation for magic blips can contradict neither physics nor other paleo data, and it's really hard to imagine century-scale effects that can do that, both up and down to let them hide.

  8. intrepid_wanders Says:

    In all seriousness, and tribalism aside, can anyone produce a paleo-“record of some kind” that does not have a divergence problem? I have a few questions:

    CHALLENGES POSED BY DIVERGENCE
    1. Problem with curve-fitting e.g. Hugershoff (Briffa 1998) and trend distortion – part solution Signal free.
    2. Problem with mixing sloping and horizontal curve fitting in Arstan (e.g. D’Arrigo 2004) – part solution RCS.
    3. End effect problems with RCS (Briffa – Hughes book) – e.g. sample bias
    4. Problem with updating chronologies (TTHH and Grudd 2008, Tornetrask)
    5. Potential problem with Crown dieback (e.g. responders / non responders)
    6. Potential MXD in sapwood problem ????
    7. Potential competition problem – tree density changes RCS shape (Helama 2006)
    8. Problem with non-linear response / skewed index distribution (Barber, Wilmking etc)
    9. Remove all these and residual is real divergence – problem with identifying cause:
    CO2 change / Nitrogen fertilisation / Global dimming / UV light / Drought stress/

    Conclusion – Lots of work to do to clarify situation.

    Is there a place I can clarify these issues?

  9. Bob Brand Says:

    Mr. Intrepid,

    Proxies of the type used by Marcott et al. 2013 do *not* have the ‘divergence problems’ you suggest. They are calibration-free proxies: contrary to tree rings, it is not necessary to calibrate them against the instrumental record from 1850 onwards.

    The reason is that isotopes in planktonic foraminifera, alkenones, Mg/Ca ratios, and oxygen isotopes in ice cores are absolute indicators of temperature. This also goes for algae species composition, e.g. in isolated cores from the Paleocene or Eocene, which will *never* offer a continuous series up to modern (instrumental) times.

    With tree-rings you have to find at least some trunks which do overlap with the instrumental record.

    Then you match these rings up with the instrumental temperature during the growing seasons, and you have the start of a year-by-year chronology. By matching the tree rings in other trunks, which didn't grow during instrumental times, against the rings in trunks that actually did, you extend the chronology further back in time.

    This is not needed with the proxies used by Marcott. They can end millennia ago, and do not have to be calibrated against proxies that extend into instrumental times.

    Any calibration concerns the systematic effects on sediments due to depth/location/retrieval technique etc.

    That is another reason why it is irrelevant if, and how many, of the proxies extend post-1850.

  10. MoreBrocato Says:

    I see that the above replies from Eli and Bob imply that there does not need to be any proxy agreement with the past 150 years (with the proxies chosen in Marcott et al) … But then what makes them valid? Agreement with Mann et al? Agreement with each other?

    Why make so much bluster about the “scythe” end of the reconstruction if it’s actually the least robust part of the study? [it’s not contrarians that fueled the press about it]

    What's to stop the automatic acceptance of any proxy reconstruction that appears to correctly mimic what you would expect given the Mann et al (etc.) papers of the past 1500 years, no matter what the post-1850 readout is? Further, is this conformity now a reason to toss out proxy candidates for assumed contamination or ineffectiveness?

  11. mikep Says:

    What Steve McIntyre has found is that
    1. There is no uptick in the individual proxies
    2. The uptick arises because of drop-out of some low/negative proxies at the end of the period
    3. The dating of the proxies in Marcott is very different from the dating given by the original authors, and this affects which proxies appear in the end period giving the uptick.

    As far as I can see the uptick is just an artefact of a poor methodology.

  12. Bob Brand Says:

    MoreBrocato,

    I mentioned these proxies do not need to be calibrated against the instrumental record from 1850 onwards.

    That is not exceptional by the way. The same goes for sediment cores from ‘deep time’, long before the Holocene – they certainly do not extend to anywhere near modern (instrumental) times.

    Often the chronology (the x-axis) is determined by carbon dating (at least for samples from this epoch): the radioactive decay of carbon-14 proceeds at a fixed rate and determines the fraction of C-14 relative to the other two isotopes of carbon. The half-life of C-14 is 5,730 ± 40 years, and this is part of the uncertainty along the x-axis:

    http://wikis.lib.ncsu.edu/index.php/The_carbon_dating
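
    The arithmetic behind that dating is simple exponential decay. A minimal sketch (ignoring the ±40-year half-life uncertainty and the calibration of past C-14 production):

    import math

    HALF_LIFE_C14 = 5730.0  # years

    def c14_age(fraction_remaining):
        # Age in years from the measured C-14 fraction relative to modern carbon.
        return -HALF_LIFE_C14 / math.log(2.0) * math.log(fraction_remaining)

    print(c14_age(0.5))   # ~5,730 years
    print(c14_age(0.25))  # ~11,460 years, roughly the span of the Holocene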

    The y-axis, temperature, is often determined from the ratio of oxygen-18 to oxygen-16 in the shells of planktonic foraminifera, which live close to the ocean surface. The temperature of the water determines how much of each isotope ends up in the CaCO3: the δ18O record. Other ways are measuring Mg/Ca ratios, boron isotopes, or the degree of unsaturation of alkenone molecules.

    These are ‘absolute’ indicators of temperature, in the sense that the ratios apply wherever and whenever you measure them – you can measure them in the lab or in modern lakes and estuaries.
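
    As an illustration, a commonly cited Shackleton-type calcite-water calibration; the coefficients differ between studies, so take this as the form of the relation rather than the definitive equation:

    def d18o_temperature(d18o_calcite, d18o_water):
        # Calcification temperature (degrees C) from oxygen-isotope values (per mil).
        d = d18o_calcite - d18o_water
        return 16.9 - 4.38 * d + 0.10 * d * d

    print(d18o_temperature(0.0, 0.0))  # 16.9 C at zero offset
    print(d18o_temperature(1.0, 0.0))  # ~12.6 C: heavier calcite, colder water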

    Is there calibration? Certainly, but not against the instrumental record since 1850:

    http://gradworks.umi.com/33/52/3352748.html

    http://wwwrcamnl.wr.usgs.gov/isoig/res/funda.html

    Tree rings do work differently, since there are no physical laws which prescribe the width of a tree ring: you have to calibrate each individual tree trunk for the dose-response relationship it shows to temperature. If it did not grow during instrumental times, you have to match its response up against trees that did.

    For this reason you can take isolated sediment cores, and determine chronology and temperature without extending the record to instrumental times.

  13. Bob Brand Says:

    Also, MoreBrocato,

    Why make so much bluster about the “scythe” end of the reconstruction if it’s actually the least robust part of the study?

    This study is not about the modern rise in temperature, we already know how that increased – see the instrumental data in the graph at the top of this page.

    It is about regional and global temperatures during the Holocene.

    The instrumentally recorded +0.8 °C – and how it deviates from the very gradually descending temperatures over the past 5500 years – is what causes the “bluster”.

    The bluster is most certainly warranted, if you include the expected rise in global temperatures over the 21st century, as Jos Hagelaars shows in the graph.

    The study just confirms what we already know about Holocene temperatures, and the way they have generally followed the slowly changing Milankovitch forcing.

  14. MoreBrocato Says:

    Why is “confirming what we already know” interesting enough for a Science or Nature publication level?

    Also, why extend the proxy all the way to 1940 if (a) it’s not robust, (b) it’s not necessary because we’ll always accept the temperature record no matter what slew of proxies show now as opposed to earlier, (c) Marcott’s thesis didn’t make that leap?

    Presumably, you're saying that any global study of the Holocene that conforms to previous thought (i.e., a gentle rise and fall through a grid resolution of 300 years) and stops short of the instrumental record, whereupon we can simply splice on what we KNOW about the present (at a much finer, monthly resolution), is all we need for a major eureka? Not that there is such a thing possible, but couldn't it also be said that a spike such as the one we're seeing in the instrumental record might be barely detectable in Marcott et al if it occurred, say 4000 BP (whether or not it intuitively 'should' be there, up or down)?

  15. cRR Kampen Says:

    “Why make so much bluster about the “scythe” end of the reconstruction if it’s actually the least robust part of the study?”

    Asks MoreBrocato… Sure, so the actual instrumental measurements naturally provide the least robust information (never ever mind dripping ice and stuff)…

    How about some marvelling at the meeting of the ends of mutually independent ways of reconstructing/measuring temperature?

    Of course it's a shocking graph. But like Bob Brand said: it simply confirms what was already known. One can fight, flee, analyse, or weep; anyway, one can do anything but think about mitigating the little problem that will shred society as we know it, and has even begun to do so (Syria is a climate change war).

    /cRR

  16. cRR Kampen Says:

    “Why is “confirming what we already know” interesting enough for a Science or Nature publication level?” (MoreBrocato)

    Because denial and revisionism exist rampantly and the problem, even if it be so dead simple as to have been foreseen by the end of the 19th century, is the biggest one humanity has to face (or turn away from, of course) since the onset of the last Ice Age.

    As for the possibility of a spike around, say, 4000 BP, first realize that such spikes would have registered for a good number of proxies over the past two thousand years but ‘magically’ do not exist over this period; second you’d have to explain how such a spike came about.
    I experimented a moment with the idea of ‘some huge methane release’ lasting half a century (the lifetime of CH4 in the atmosphere is < 10 years) and got a dead simple rebuff: that CH4 would have been trapped in ice-core air and found there.
    Please offer some wild theories of your own – a Popperian way of testing the graph. Needless to mention, we already know the cause of the spike we are in now.

    /cRR

  17. BBD Says:

    Excellent comments by Bob Brand!

    In contrast – and disturbingly so – stands the total incomprehension exhibited by the nay-sayers. I am willing to bet this ignorance is widespread, hence the gullibility of that audience.

  18. BBD Says:

    mikep

    As far as I can see the uptick is just an artefact of a poor methodology.

    As far as I can see, you haven’t read the OP.

  19. Jos Hagelaars Says:

    @MoreBrocato

    “Why is “confirming what we already know” interesting enough for a Science or Nature publication level?”
    Because there was only a fragmented view of the Holocene, and this study of Marcott et al is the first to scientifically present a temperature reconstruction for the complete period that is called the Holocene. Being the first is always very interesting; that applies to sports and school, but also to science.

    “Also, why extend the proxy all the way to 1940..”
    The instrumental temperature data show a steep rise after 1850 on a geological timescale, so you would expect a similar signal in the proxies over the same period. When present it should be mentioned; it adds to human knowledge and informs other scientists and the interested public.
    The signal was present, but not robust, as the scientists mentioned. Therefore I think Marcott et al did the logical thing a scientist should do: present the signal and also indicate its uncertainty.
    Not mentioning such a signal because some people do not like the conclusion that could be drawn from it in combination with the instrumental data would be wrong.

  20. John Mashey Says:

    Do time series ever have end effects that raise uncertainty there?
    Google: “time series” “end effects”: 23,000 hits

    There is nothing magic about Marcott et al that avoids this, but as everybody says, we have the modern records. The only real concern would be if the modern data were far outside any reasonable uncertainty interval from that part of the reconstruction. So they say:
    “Although some differences exist at the centennial scale among the various methods (Fig. 1, C and D), they are small (<0.2°C) for most of the reconstructions, well within the uncertainties of our Standard5x5 reconstruction, and do not affect the long-term trend in the reconstruction.”
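
    A toy demonstration of the generic end effect (synthetic white noise and a simple shrinking-window smoother; nothing specific to Marcott et al):

    import numpy as np

    # Near the boundary the averaging window shrinks, so smoothed values
    # rest on fewer samples and are correspondingly noisier.
    rng = np.random.default_rng(42)
    x = rng.normal(0.0, 1.0, 1000)
    half = 10  # half-width of a 21-point window
    smoothed = np.array([x[max(0, i - half):i + half + 1].mean() for i in range(x.size)])
    # Interior points average 21 samples; the first point averages only 11,
    # so its sampling variance is roughly twice as large.
    print(smoothed[500], smoothed[0])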

    The real problem is that some people simply cannot bear what advancements in science tell us, and prefer to:
    – repeat blog arguments about statistics, with little understanding
    – stick with “flat-earth maps”, as in the Adoration of the Lamb.

    It would be a lot more straightforward of blog posters if they would simply say upfront: “under no circumstances will I ever accept any evidence that human actions modify climate, and especially, it can never, ever be true that the post-industrial rise is anthropogenic.”
    At least creationists are more straightforward like that about evolution.

  21. Doug Proctor Says:

    BBD Says:
    March 20, 2013 at 19:02
    mikep

    As far as I can see the uptick is just an artefact of a poor methodology.

    As far as I can see, you haven’t read the OP.

    Okay… I reviewed the individual proxies. It didn't look clear to me. It looked like a mathematical artefact that said there was a global response when it looked like a whole bunch of regional responses.

    And adding instrumental readings of high-frequency content to low-frequency content is misleading, in that we think the high variation of today is unique. We don't have data to say that. Smooth the post-1800 data with an appropriate running average and you won't get the drama, but you may get a sense of what has been going on for a long time.

  22. toto Says:

    “Why is “confirming what we already know” interesting enough for a Science or Nature publication level?”

    Because:

    1- Cool proxies.

    2- Cool methods to take into account the time uncertainty.

    3- It’s one thing to know something, it’s another thing to quantify it (see Figure 3).

    “Not that there is such a thing possible, but couldn't it also be said that a spike such as the one we're seeing in the instrumental record might be barely detectable in Marcott et al if it occurred, say 4000 BP (whether or not it intuitively ‘should’ be there, up or down)?”

    Of course! Marcott insists on the low time-resolution of his method.

    But what we are seeing right now is NOT a spike. It is a warming that will be *stable* for at least centuries, even if we stop all emissions by 2100.

    Marcott would have seen *that* if it had happened. He didn’t, so barring some actual error (as opposed to SteveMc’s “violent agreement”), his conclusion is that it didn’t.

  23. cRR Kampen Says:

    “Smooth with an appropriate running average for the post 1800 data and you won’t get the drama, but you may get a sense of what has been going on for a long time.”

    Scroll to top of page and review Figure 1.

  24. Bob Brand Says:

    MoreBrocato,

    couldn't it also be said that a spike such as the one we're seeing in the instrumental record might be barely detectable in Marcott et al if it occurred, say 4000 BP

    In science you work with the data you have, and which you can get. In science fiction you work with whatever implausible idea you can dream up, as long as it does not blatantly contradict physics (or even then). Please see my responses here and here, and what John Mashey wrote.

    In theory a global ‘blip’ of +0.8 °C over a period of ~ 200 years might just go undetected.

    But how do you explain such a sudden enormous influx and efflux of heat? And without GHGs? Because those would certainly register in ice cores, as cRR Kampen already mentioned. A sudden solar variation would show up in both Be-10 and C-14, as would a supernova event.

    It would have to be something that leaves no trace, apart from the huge temperature ‘blip’.

    Also, how believable is it that our current +0.8 °C temperature rise will suddenly reverse – and exactly cancel – in the same amount of time in which it has arisen? By what physical mechanism?

    Because that is what *is* needed for it to be a ‘blip’.

    Failing that, it would be a step change – and a step change of +0.8 °C would stick out like a sore thumb in this record.

  25. Paul Kelly Says:

    Questions about Marcott have nothing to do with science fiction or how CO2 influences climate, and they are not just coming from skeptics. Answers from the authors are promised.

    1- Cool proxies. Oddly enough, the marine sediment cores are cooler than the colder ice cores. No tree rings, yeah! Stoat has questions about the number of proxies and their distribution over time and region.

    2- Cool methods to take into account the time uncertainty. Dating of proxies and core tops is the key.

    3- It’s one thing to know something, it’s another thing to quantify it (see Figure 3). One question raised is what were the reasons for changes to Marcott’s doctoral thesis as it went through review for publication?

  26. Bob Brand Says:

    Paul,

    .. have nothing to do with science fiction or how CO2 influences climate ..

    Sadly they do – if the question presupposes a “spike” of the magnitude we have seen since 1900, as well as an equally large decrease over the same amount of time. That is the only way you could get a ‘blip’ of this magnitude with a duration of ~ 200 years.

    Failing that, you have to ask if a longer excursion or a step change of +0.8 °C would register in the record.

    In the article, the authors point out:

    Because the relatively low resolution and time-uncertainty of our data sets should generally suppress higher-frequency temperature variability, an important question is whether the Holocene stack adequately represents centennial- or millennial-scale variability.

    [.. then they describe the Monte Carlo procedure ..]

    The results suggest that at longer periods, more variability is preserved, with essentially no variability preserved at periods shorter than 300 years, ~50% preserved at 1000-year periods, and nearly all of the variability preserved for periods longer than 2000 years (figs. S17 and S18).

    Elsewhere they emphasise that the median 120-year resolution is effectively coarsened further by the Monte Carlo procedure. So yes, the ‘blip’ might just about escape detection. But how physical and plausible is such a blip?

    It might be more pertinent to ask about a step change of +0.8 °C.

  27. Ed_B Says:

    I would recommend ClimateAudit for a serious discussion of the facts, rather than the speculations here.

  28. Paul Kelly Says:

    Only time will tell whether the 20th-century warming represents a step change. It probably does. However, the long-term pre-1850 cooling trend and the current leveling of temperatures suggest natural forcings and feedbacks may well be net negative.

    Some say that without human influence, we would be on the long slow path to glaciation. According to John Mashey, without humans CO2 would now be at 240 ppm with corresponding lower temperatures.

  29. Eli Rabett Says:

    Rarely has Eli read anything as incredibly silly as Paul Kelly’s

    3- It’s one thing to know something, it’s another thing to quantify it (see Figure 3). One question raised is what were the reasons for changes to Marcott’s doctoral thesis as it went through review for publication?

    This paper is BASED on the thesis; it is a couple of years down the road, and information has been added. This is the normal course of things, unless you believe that theses are without fault and written in stone.

  30. Paul Kelly Says:

    Eli,

    Let me rephrase. What new information (or perhaps methodology) was the reason for differences in Marcott’s doctoral thesis and the published paper?

  31. Jos Hagelaars Says:

    In Marcott's thesis there is only a reference to the RegEM algorithm. In the Science paper more methods are mentioned. Compare figure S4 in the supplemental info of the Science paper with figure 4.2b of the thesis. In both images the RegEM5x5 method does not show the famous ‘uptick’, but the Standard5x5 method, used in the paper, does show it.

    We know from the instrumental data that there is a steep rise in temperature over the last ~100 years. The Standard5x5 method does show this steep rise, so I would say this method is better than the method used in the thesis. Note that it is not ‘robust’, as the scientists explicitly state.
    I think Eli is right: time has passed since the thesis, and methods improve over time.

  32. John Mashey Says:

    Paul Kelly noted my comment about ~240ppm …
    But I rather disagree with the rest.
    There has certainly been a fairly well understood long-term negative Milankovitch forcing, but Paul apparently reversed the signs of the natural feedbacks, because if the balance of natural feedbacks were negative, we'd have much less jagged glacials. The net feedbacks have to be positive: ice/snow albedo is positive in either direction, although it works faster getting warmer.

  33. toto Says:

    “Oddly enough, the marine sediment cores are cooler than the colder ice cores.”

    You bet they are, if only because you can find them all across the globe!

  34. Bob Brand Says:

    Jos, Paul,

    As Jos mentions:

    Compare figure S4 in the supplemental info from the Science paper with figure 4.2b of the thesis. In both images the RegEM5x5 method does not show the famous ‘uptick’ but the method Standard5x5, used in the paper, does show it.

    The reason why the Standard5x5 does show more warming at the very end of the record (where data points are sparse) is presumably that no infilling is used. This maximises the weight of the actual data points.

    It is debatable if infilling would be the proper choice. We know from the instrumental record that temperatures are changing quickly over 1890-1950, so infilling based on pre-1890 samples would be questionable.

    So they show ALL the different methods in Fig. 1A-F, and they note that gap-filled and unfilled methods only differ after 1890.

    Without filling data gaps, our Standard 5×5 reconstruction (Fig. 1A) exhibits 0.6 °C greater warming over the past ~60 yr B.P. (1890 to 1950 CE) than our equivalent infilled 5° × 5° area-weighted mean stack (Fig. 1, C and D). However, considering the temporal resolution of our data set and the small number of records that cover this interval (Fig. 1G), this difference is probably not robust. Before this interval, the gap-filled and unfilled methods of calculating the stacks are nearly identical (Fig. 1D).

    Had they stayed with only RegEM, as in the thesis, the Holocene would have looked slightly less warm. See Fig. 1D.

  35. MoreBrocato Says:

    I’m grateful for the responses to my posts. It looks like discussions can occur here.

    I still believe the up-tick at the end of Marcott's reconstruction is held in high value for this paper, particularly in press coverage (and by other scientists; see the original blog post above, which points out the up-tick's similarity to Mann et al.). And yet, it is not a central or robust main conclusion of the paper. It appears that there are several scientists here who have no qualms about setting any troubles about this aside because of what we know the instrumental records show.

    But is there any inconsistency of application here between Marcott et al and even ‘Klotzbach Revisited’ below, where it is pointed out that the “main point” of the paper is not well supported by the paper itself, while the same concern is set aside when Marcott et al hits the press? At present, I'm not buying that zero climate scientists (or climate bloggers) highlighted the up-tick at the end of the Marcott reconstruction.

  36. John Mashey Says:

    Clearly, if any press, bloggers or anyone else focused on the uptick, the paper is invalid and can be thrown out, and any instrumental temperatures ignored.

  37. Bob Brand Says:

    As The Great Wheelchair Graph shows, a proper comparison can and should be made between:

    1. the glacial -> Holocene transition (Shakun);

    2. the Holocene from 11,300 BP – AD 1890 (robust) and 1950 (probably not robust);

    3. instrumental temperatures e.g. HadCRUT4 AD1850 – now;

    4. projections A1B now – 2100.

    Marcott et al. do not comment on the ‘uptick’ in their paper. They do comment on the comparison between the Holocene and modern (instrumental) temperature change, and on the comparison to projected warming.

    There exists the strange idea that the purpose of the paper is to check or demonstrate the 20th century warming. It is not. This paper is about the Holocene, as far as they could possibly extend the paleo-record.

    It would simply be wrong to delete the post-1890 data from the record, even though it is indeed sparse.

  38. jyyh Says:

    Are people suggesting the instrumental record is wrong? It seems to me that there is no conspiracy by the thermometer manufacturers. NOAA has checked this by making a fun study extending the proxies into the instrumental period of the temperature records, proving again to Republicans that they have too much money to spend on literature research.

    http://thinkprogress.org/climate/2013/03/20/1744221/noaa-robust-independent-evidence-confirms-the-recent-global-warming-measured-by-thermometers/

  39. mikep Says:

    What appears to be different between the thesis and the article is that some of the proxies have been redated, changing the set of proxies which appear at the endpoint. This is what gives rise to the uptick.

  40. Bob Brand Says:

    Mikep,

    The datasets which have been used for the Science publication are newer, and more complete, than the ones used for the thesis.

    AFAIK Marcott et al. have made an effort to retrieve the latest and most complete versions of the proxy data from the respective researchers and research groups.

    The thesis however was focused on the methodology of the multi-proxy reconstructions and the properties of the algorithm, and for the thesis no effort was made to ask for the latest data (since that was not the focus of the thesis).

    It is only proper that Marcott et al. asked for the most up-to-date datasets and invested the time to align these on the same, uniform timebase.

    And once again we see the misleading diversion that Marcott et al. 2013 is about ‘the uptick’…

    It is not. It is a reconstruction of the Holocene, using the most complete datasets and most diverse methods. They made an effort to extend the record as far as they possibly could.

    And the Science paper shows both the not-infilled version (with a larger rise at the end) as well as the infilled version (smaller rise).

  41. Levi Says:

    I have trouble understanding some of the positions here. None of the critics of Marcott claim that the actual temperature record is wrong (as it is not a basis of the study). But if the “signal” in the spike at the end is not evident in the source data and was an artefact of methodology, then you apparently have a divergence of the proxies from the instrumental record that calls into question the validity of the proxies.

    Tree rings have a divergence problem that makes them look like weak proxies. If the proxies used here also have a divergence problem, raising the possibility that these too are weak proxies, how can this not call into question the conclusions of the rest of the paper (particularly when trying to compare Holocene temperatures to modern temperatures)? At the very least it would suggest the proxies showing the divergence might need to be removed, and possibly any others that rely on similar forms of measurement.

  42. Bob Brand Says:

    Levi,

    Once again, the ‘spike’ or blip at the end of the record is not the subject of the study. You are once again diverting attention to the 1890-1950 part of the record.

    It is NOT about those 60 years with very sparse data in these low-resolution proxies. The vast majority only runs into the 19th century, or stops even centuries or millennia before.

    It is about the 11,300 years of the Holocene, with much better coverage in these proxies.

    This is the gist of it.

  43. Bob Brand Says:

    That being said, please allow me to elaborate:

    1) The record has to end somewhere. Marcott et al. used the latest, most complete versions of these 73 low-resolution proxies. These end during the 19th century and many end even centuries or millennia before. Just a few extend past AD 1890.

    2) “.. if the “signal” in the spike on the end is not evident in the source data and was an artefact of methodology ..

    No, the ‘spike’ IS in the source data, what little there is beyond 1890. It is NOT ‘an artefact of methodology’ and it is NOT a divergence. There is just very little data after 1890.

    3) In the paper and the supplementary material, Marcott et al. show extensively that if you use the source data ‘as is’ (the Standard5x5 method without infilling), you see a considerable spike after 1890. However, if you choose to replace the missing source data over 1890-1950 (because most proxies have no data there) with extrapolated values from the 19th century and before – this is the ‘infilling’ – then you see much less of a ‘spike’.

    This is hardly a mystery: the infilling ‘dilutes’ the few data points after 1890 with ‘guessed’ values based on pre-1890 data (see the toy illustration at the end of this comment).

    Now, what version is the most correct?

    4) Marcott argues (right on the first page) that 1890-1950 is: “probably not robust“, because the results differ between these two methods AND because there is very little data after 1890. Before 1890 these two methods are virtually indistinguishable, they give the same results.

    5) The ‘infilling’ would be questionable during an episode of quick temperature change, because you would be using extrapolated values from a known ‘stable’ episode which masks change. Well, do you actually know/suspect that 1890-1950 had quick temperature change? Of course you do. You know this from the instrumental record and from lots of other paleo-research. But you ALSO know this from the few proxies which actually have data after 1890 – there is a signal, albeit not much data.

    So, what did Marcott et al. choose?

    * First of all, they included all the data from the proxies they could get. This is a no-brainer, this is what they OUGHT to do.

    * Secondly, they point out that 1890-1950 is probably not robust.

    * Thirdly, they show BOTH the Standard5x5 (with the larger ‘spike’) and the RegEM (infilled version, smaller spike).

    In the paper they explain they prefer the Standard5x5 for the reasons I have mentioned in 1 to 5. They also show it makes virtually no difference before 1890. Then they concentrate on the Holocene, as this is their topic.
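
    And the toy illustration promised under point 3 (invented numbers, purely to show the dilution mechanism; not Marcott's data or algorithm):

    import numpy as np

    # Suppose only 3 of 10 proxies have post-1890 data, and all three read +0.6.
    observed = np.array([0.6, 0.6, 0.6])
    stack_no_infill = observed.mean()  # 0.6: the sparse data speak for themselves
    infilled_gaps = np.zeros(7)        # 7 gaps filled with a pre-1890-like value of 0.0
    stack_infilled = np.concatenate([observed, infilled_gaps]).mean()
    print(stack_no_infill, stack_infilled)  # 0.6 vs 0.18: infilling dilutes the spike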

  44. MoreBrocato Says:

    Thank you for your elaboration…

    “1) The record has to end somewhere. Marcott et al. used the latest, most complete versions of these 73 low-resolution proxies.”

    Would you not say that some of these proxy versions ‘became’ the latest versions during the process of Marcott et al due to their redating of various proxies since the original thesis, which can affect your #2:

    “…the ‘spike’ IS in the source data, what little there is beyond 1890. It is NOT ‘an artefact of methodology’ and it is NOT a divergence. There is just very little data after 1890.”

    Is there not a methodology applied to any dating change in the proxies, and can’t that methodology affect the resultant appearance of a little-data low-resolution proxy? Surely…?

  45. Marco Says:

    MoreBrocato, follow tamino of the next few days/weeks and you get all the answers you want. Or maybe not, depends on how open you are.

  46. Marco Says:

    of = for

  47. mikep Says:

    Bob Brand,
    There do not appear to be any new data in the Science article compared to the thesis. But there is extensive redating. One proxy has been brought forward by over one thousand years! The issues are clearly explained in a guest post on Judith Curry's website here:

    Playing hockey – blowing the whistle


    It’s hard to take the field seriously if it defends articles like this (and MBH 1998 and Mann et al 2008) even when the flaws are clearly pointed out.

  48. cRR Kampen Says:

    From the link supplied by mikep: “The ‘blade’ is only present in data for figure S12a’s 20-year sampled reconstruction after 1900. It is not present in the 100-year version that goes to 1900, and which closely tracks the 20-year version to that time. Although lower frequency sampling will mask any earlier changes, significant data changes obviously occurred in the years after 1900 where the blade arose.”

    That's flabbergasting. We all would have thought AGW was a thing UNTIL 1900, and that after that we sank into the new Ice Age, didn't we? What conspiracy-mongering.

  49. Bob Brand Says:

    MoreBrocato,

    What I said is: “Marcott et al. used the latest, most complete versions of these 73 low-resolution proxies.” Meaning the *newest* versions, since dating of the cores is often being extended, amended, re-dated as improved methods become available – or enough student workforce to work on the sampling. ;)

    The PhD thesis was directed at testing/comparing methodology. Marcott simply used published data for that, since it was a ‘proof of concept’ of the RegEM method. For this publication they made an effort to re-date it in a uniform way.

    I have been looking just a tiny bit into the re-dating and some of it seems to be related to changes in C-14 dating of the alkenone cores. Another thing is ‘end-effect’ where multiple samples do/don’t get lumped into the latest available timeslots (so yes, that might influence the 1890-1950 part of the record). But then, Marcott et al. already noted there: “probably not robust“.

    Have you looked into the reconstruction by Nick Stokes?

    http://moyhu.blogspot.nl/2013/03/my-limited-emulation-of-marcott-et-al.html
    http://moyhu.blogspot.nl/2013/03/next-stage-of-marcott-et-al-study.html

    His method is equivalent to Standard5x5 (no infilling):

    With published dating
    With revised dating

    Both show the ‘uptick’, but it is larger with revised dates. The re-dated version agrees very well with Marcott et al 2013.

    One thing that hasn’t been done yet is the Monte Carlo simulation. So in Marcott’s publication the little squiggles get smoothed out, as they should.

  50. mikep Says:

    It's not the radiocarbon dating that makes the big difference – it's the core tops.

  51. mikep Says:

    Bob,
    You seem to be slowly getting there. Using the simple average of a changing number of proxies is not reliable at the endpoints. It is quite possible for all the individual proxies to go down but the average to go up, because the more negative proxies drop out; it could also go down if the less negative proxies drop out. It is simply a poor method for endpoints. Nick Stokes' redating was, I think, just looking at the radiocarbon dating; he and Steve McIntyre agree this makes little difference. But the big issue is with the core tops. The Science article appears to have arbitrarily changed the dates of the core tops, changing the mix of proxies at the endpoints. So one proxy's end date was brought forward by 1,008 years compared to the originating author's estimate, others by several hundred years. Other proxies were shifted back. My conclusion is that this study can tell us nothing about the twentieth century, though it may have merit further back, where the mix issues are less important.
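
    To make the drop-out mechanism concrete, a minimal numeric illustration (made-up numbers, not the actual proxy values):

    import numpy as np

    # Every proxy declines from one step to the next, yet the simple mean
    # rises, because the coldest proxy stops reporting in the second step.
    step1 = np.array([1.0, 0.0, -3.0])  # three proxies reporting
    step2 = np.array([0.8, -0.2])       # both remaining proxies went down
    print(step1.mean())  # -0.67
    print(step2.mean())  #  0.30: the mean goes up although no proxy did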

  52. Levi Says:

    I understand that the immediate response will be that this study never purported to say anything about the twentieth century, rather that all the press about this study is filled with people drawing high-frequency twentieth-century comparisons to a highly smoothed low-frequency chart of the Holocene.

    That's why I cannot understand why the grafted Hagelaars chart at the top of the post is being described as “the big picture”. Even if the instrumental record follows that ugly orange vector to 2100, someone running this study with the same methods in 2500, when dense data is available, is not going to see anything like the “wheelchair chart”, since the increase will be smoothed into a more gradual curve. The purpose of many of the methods used was to eliminate high-frequency spikes and dips. It illustrates the very apples-and-oranges approach that has people complaining about the shape of the non-robust period. Trying to make a point with such a chart is “even less robust”.

  53. Marco Says:

    mikep,

    You are also slowly getting there. Regardless of whether the uptick is an artifact or not, we already know how the temperature has developed over the 20th century from other sources, and we have a reasonable idea of what it will do after that (depending on the scenario we choose to follow).

  54. Bob Brand Says:

    Hi Levi,

    Even if the instrumental record follows that ugly orange vector to 2100, someone running this study with the same methods in 2500 when dense data is available is not going to see anything like the “wheelchair chart”

    The point is that people living through 2013-2500 will NOT run this study again with ‘the same methods’. They will actually experience and accurately measure the GMSTs over this period of time.

    The ‘ugly orange vector’ is the projection for the (rather moderate) A1B scenario – it actually does climb that fast, on the timescale you see above, so the rate of change is broadly correct.

    Note that Figure 1 is on a 24,000 year scale, so it will pretty much look like a step change (about 1% of the x-axis).

    We have talked about the high-frequency signals already – as amply discussed by Marcott and Shakun, these are smoothed in the low-resolution proxies, but the confidence interval from the Monte Carlo simulations gives an indication of the band within which they should lie. For short-term variability you can also look into the higher-resolution proxies of Moberg 2005, Mann 2008, d'Arrigo 2006 and many others.

    I think most people agree a ‘blip’ is not what we’re seeing, more like a ‘step change’ on the scale used in Figure 1.

  55. cRR Kampen Says:

    “I think most people agree a ‘blip’ is not what we’re seeing, more like a ‘step change’ on the scale used in Figure 1.”

    I belong to ‘most’ people in this respect, but I do detect a kind of confirmation bias in this view. The point is, I know what is causing the step change and I'm quite used to this knowledge, especially as the evidence for it piles ever higher and deeper. But suppose we had no inkling of why this particularly large change is occurring; how could we then conclude it must be a step instead of a spike?

    Apart from being interesting from a science-philosophical point of view, this is a question we might expect from laymen (and even from true skeptics among them).

  56. Bob Brand Says:

    cRR,

    Well, we could look at HadCRUT4 on the scale above: the angry red spike which is hardly visible below that ugly orange. ;)

    Jos included that – even if you leave out the orange part, it does stand out.

    Note that Marcott and Shakun are quite careful to point out that we have not necessarily already exceeded the full range of Holocene temperatures:

    Our results indicate that global mean temperature for the decade 2000–2009 has not yet exceeded the warmest temperatures of the early Holocene (5000 to 10,000 yr B.P.)

    Tamino disagrees:

    But perhaps if you include the confidence interval shown in Marcott 2013, Marcott and Shakun are right: we could still be at about the level of the warmest temperatures of the early Holocene (globally).

  57. cRR Kampen Says:

    Well Bob, every time I look at that graph I think of composition and ends, and concentrate on what has already been realized, which is quietly astounding in itself.
    Even though I have known for years that it is really that bad.

    On whether the Anthropocene has already passed the Optimum, I'm calling it too close to call, but my bet would be with Tamino. The check would be evidence for or against a couple of decades/centuries of ice-free Arctic seas for at least a third to half of the year. Clearly at the current global temperature the Arctic ice cannot exist other than seasonally; now I wonder whether this could have been so in the early Holocene. Maybe the bears know?

  58. Eli Rabett Says:

    cRR, we know that it is a step change because we know the driver, greenhouse gas forcing, and we know that the increase cannot be reversed on a century timescale. We do need to explain that to people. Ray P and David Archer do a good job of that. :)

  59. Jos Hagelaars Says:

    @Levi

    “..since the increase will be smoothed into a more gradual curve.”

    I do not think so. As Eli mentions, the parameter that drives the temperature will not magically disappear in 2100. Even if we stop emitting greenhouse gases, mainly CO2, it will take a very long time before the CO2 concentration is back down to pre-industrial levels. The processes that remove CO2 from the atmosphere are very slow: absorption by the ocean takes 2 to 20 centuries, bicarbonate formation by reaction with calcium carbonate has a timescale of 3,000-7,000 years, and silicate weathering a timescale of about 10-100 kyr.
    See for instance Archer et al 2009:

    [PDF: archer.2009.ann_rev_tail.pdf]

    This means that the rise in temperature with respect to pre-industrial levels will also remain for a very long time. To get an impression, look at figure 1c in Frölicher et al 2010:

    [PDF: froelicher10cd.pdf]

    The red curve reaches +3°C in 2100 and then all CO2 emissions stop. Even in the year 2500 the surface warming will still be +2 °C. A study in the year 5000 with some smoothing over a couple of centuries will still generate a graph with a step change for this period.
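
    To get a feel for such a long tail, a crude multi-exponential sketch (made-up shares and timescales loosely inspired by the processes above; NOT the actual impulse response from Archer et al):

    import numpy as np

    # Crude three-box decay tail: each removal process takes out a share of
    # the CO2 pulse on its own timescale (all numbers invented for illustration).
    t = np.array([0.0, 100.0, 500.0, 1000.0, 5000.0, 50000.0])  # years after emissions stop
    shares = np.array([0.5, 0.3, 0.2])               # ocean uptake, CaCO3, silicate weathering
    timescales = np.array([500.0, 5000.0, 50000.0])  # years
    airborne = (shares * np.exp(-t[:, None] / timescales)).sum(axis=1)
    for ti, a in zip(t, airborne):
        print(f"year {ti:7.0f}: {a:.2f} of the pulse still airborne")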

  60. John Mashey Says:

    The “best” that humans have ever done in CO2 reduction, albeit unwittingly, is our part of the ~9 ppm drop from ~1525 to 1600 AD.

  61. John Mashey Says:

    Jos: if you get in the mood for diddling the graph, a nice addition might be a rectangle that covers 8000 BP to the present and the maximum temperature range during that period, i.e., human civilization, with a number for that range, so one need not eyeball it.

  62. Jim Bouldin Says:

    I’ve appreciated from day one what you try to do in these discussions Bart, but this kind of thing proves nothing really. I realize this was not written by you, but the point remains. There are gaping holes big enough to drive trains through in much of paleoclimatology and yet people still glom on to this stuff as if it actually means something important.

    I never cease to be amazed how the online climate discussions move from fixation to fixation on particular studies, someone initiating and invariably a whole bunch of others then dutifully following, without even addressing the fundamental questions that make or break such studies, while the rest of the literature gets ignored.

    Don’t people in this field ever get tired of this crap?

  63. Bart Verheggen Says:

    Hi Jim,

    You have a point that there is a certain degree of fixation on this and other studies, perhaps to the detriment of a) fundamental questions (your point) and b) the big picture (my favorite point).

    As for b), I think the “wheelchair graph” is a very powerful and insightful image. I also think that the big picture of this graph is robust, even if in the details there are plenty of quibbles and fundamental questions left to address.

    Paleoclimatology being close to your field of expertise, it is natural for you to focus on a). Others, like me and probably Jos, focus more on b). Both are valid perspectives, both are needed, but they’re also different.

    That said, your point of putting single papers in the perspective of the wider literature is certainly important, esp for the big picture.

  64. Bob Brand Says:

    Jim,

    I agree with Bart that many papers seem to be ‘hyped’ by the blogosphere, which is a consequence of the ‘battle’ (yes, that is what it is called in some circles) between climate-skeptic blogs and more mainstream blogs.

    The result is paranoia concerning some particulars of these papers, while the larger context tends to get lost. That is an important feature of the graphic by Jos Hagelaars: placing these reconstructions in context with current temperatures and projections.

    Both Shakun 2012 and Marcott et al. are broadly in line with most previously published research – some show a colder LGM or a less prominent Younger Dryas. The HadCRUT4 data and A1B projections are hardly controversial either – they are what is generally supported. Of course A1B is ‘only’ a projection (and a moderate one, at that).

    On the whole this is the context which is supported by mainstream science.

  65. cRR Kampen Says:

    Jim, aside from the points brought forward by Bart and Bob, I'll answer your 10:16 question from a different angle.

    Climatology does not differ from other sciences in its history of paradigm evolution and its generation of a few iconic pictures (or formulas). Nor does it differ from the other natural sciences in that it accumulates knowledge and is able to depict ever better summaries of the total of what is known. Such summaries often make it to the public, e.g. ‘E = mc²’ as the icon of relativity theory. Are we, after a century of this, ‘tired of this crap’?

    Before the ‘Wheelchair’ the iconic graph in climatology was the ‘Hockey Stick’. As (paleo-)climatology advances, only two kinds of advancement over the ‘Hockey Stick’ could be expected: 1) refinement, i.e. better resolution of the 2000 years of climate spanned by the ‘Hockey Stick’, and 2) prolongation further back into history.

    Marcott et al did the second, not based on a single publication or investigation, but using basically the whole gamut of paleoclimatology knowledge spanning an epoch reaching back into the last ice age. With the instrumental record of the last 150 years or so added to it, we got a figure for what global temperature was like and, more, for what global temperature did over the entire Holocene. And we can compare it to what is happening during our lifetimes.
    The ‘Wheelchair backrest’ results from a ‘most likely’ scenario for the remaining part of this century. It is speculation, but it is speculation based on laws of physics and empirical evidence piled ever higher and deeper.

    It seems to me you feel climatologists have been telling different stories every which way some hype goes. I disagree absolutely. Having lived more or less in this subject for about thirty years (the time scale of the ‘hockey stick blade’), and having either understood or shunned basic radiation physics, I basically heard just two simple messages: 1) carbon dioxide is a GHG and more of it has consequences; 2) global temperature is rising at an unprecedented rate – I knew this in 1989 and, of course, ever since.

    The message just gets more detailed and clearer over time. Especially as weather gets worse, of course.

  66. Jim Bouldin Says:

    Bart, I’m a “big picture” guy as much as the next guy. But that’s a scale issue, and regardless of scale, you have to make sure your methodological procedures are completely reliable in this kind of work. There is enormous potential for error in paleoclimate estimation, and too much glossing over of this fact, to the point that questions of outright denial, or incompetence at best, begin to arise in the mind. After what I’ve discovered in dendroclimatology, not to mention other things I’ve witnessed in science, there’s not a snowball’s chance in hell I’m taking anything at face value anymore.

    Bob Brand: more or less the same point. Context is fine if your results are reliable, and consensus, in and of itself, means nothing. The consensus can be wrong, has been many times in history.

    cRR Kampen: I think it was clear that I was talking about climate discussions on the internet, not the achievements of Einstein, nor paleoclimatic literature. And terms like the “hockey stick” and now “the wheelchair!” (spare me), are symptomatic of how impoverished and lame these discussions have become.

    The issue here is that the public discussion of the science on blogs has just broken down. Very few are explaining the real nuts and bolts to people or having thorough and wide-ranging discussions on critical topics, sticking absolutely to the science alone. But since Bart has been probably the #1 person trying to remedy that situation, I don’t think I need to say more on it.

  67. Jim Bouldin Says:

    For those who don’t know what I’m referring to regarding dendroclimatology, start here and work forward:

    Severe analytical problems in dendroclimatology, part one

  68. cRR Kampen Says:

    “And terms like the “hockey stick” and now “the wheelchair!” (spare me), are symptomatic of how impoverished and lame these discussions have become.”
    [Jim Bouldin 19:45]

    Those graphs depict reality. You’re quite welcome to suggest better ways to describe and name stuff. Meantime we don’t have to get lost in msm-happy metaphors, do we?

    So if the discussion got lame, how come?

  69. Jim Bouldin Says:

    The discussions got lame because there aren’t enough scientists engaged in a thorough and understandable discussion of the issues involved, and nature abhors a vacuum.

    Those graphs do not “depict reality”. If you think that, you have a lot of learning to do. You can start with tree rings if you like, I’ve laid the issues all out clearly and at length.

  70. cynicus Says:

    Reading your posts, Jim, I get that your point is there are remaining issues. So what? It’s science, there are always remaining issues. To me the real question is: what’s the impact of these issues on the results and our knowledge?

    Having seen many reconstructions, using many different independent sources and methodologies, each with their own strengths and weaknesses but with comparable results, I doubt that fixing the remaining issues will invalidate the previous results altogether. The impression I get is: those issues may be real, but apparently the impacts are not significant enough to invalidate the results, or the scientists working on these reconstructions are aware of the issues and either judge them to be insignificant or compensate for them.

    So in my opinion, instead of simply pointing to possible issues and deducing purely from their existence that the results must be invalid, you should show what the effects of those issues are on scientific understanding. To prove that your point is valid and the existing reconstructions are invalid, you need to show how the results differ between the current methodologies and your improved ones, or how these issues affect the outcome.

    I know this requires more work than simply pointing out the existence of possible error sources, but it’s the only way science will advance.

  71. Jim Bouldin Says:

    Cynicus, you’ve missed a very major point of my articles.

    I **did** in fact show what the effects of the described problems are on reconstructions–and not just *existing* reconstructions, but *any* possible reconstruction. Being able to address your point is exactly why I took a theoretical approach to the issue of extracting a true trend. I also provided R code, along with extensive verbal descriptions and graphic results, that demonstrates exactly why this is the case. See part 5 for a link to that R code and run it yourself, and part 12 for the detailed discussion of why a trend cannot be extracted. And read the whole series, carefully. I put a lot of work into it and I know what I’m talking about on this.

    The “there are remaining issues, always will be so what” argument is the standard panglossian response used by many scientists whenever their work is shown to be critically faulty. I’ve seen it over and over again, used as a +/- complete cop out.

    Also, this idea that various reconstructions are all “broadly similar” is a meaningless phrase, and in many cases, flat out untrue. Another pangloss.

  72. Jim Bouldin Says:

    Forgot to state there, and just so the point is absolutely clear:

    It is **not possible** to confidently state anything about long term climatic trend estimates, either their central tendency or their confidence limits, using tree ring size, with the existing set of available methods and data. This **invalidates** essentially all large scale reconstructions. Yes all of them, they are not trustworthy. It’s serious, very very serious. And until people start to actually take it seriously, I’m going to keep ramping up the publicity on it.

    And on top of that…even if someone would try to wiggle around that issue, they are immediately confronted with the very serious set of issues presented by non-linear biological responses generally, and described most clearly w.r.t. tree rings and climate by Loehle (2009).

  73. cRR Kampen Says:

    Jim Bouldin, which of Marcott et al’s proxies are tree rings, again?

    Click to access Marcott.SM.pdf

  74. cynicus Says:

    Jim, I readily admit not having read all of your blog articles very carefully or performed the same level of research as you did, and I don’t intend to. I also know that you have a paper going through the mill at the moment that addresses some issues with dendro proxies. I also believe you’re sincere in your critique, if that helps you to put my comments into context.

    Having not done my own proxy analysis I have to resort to trusting an authority, and this is where it gets tricky: there have been many people claiming to know about ‘this’ and asking to be trusted ‘because’. McK, McI and a host of other blog critics come to mind who thought they had found critical issues. I think history has shown many of these to be unfounded, or at least not as influential as the authors purport them to be.

    And make no mistake: I want to believe that the current warming is nothing special in the Holocene, I want to believe that we’re not the cause, etc. so to me there are a lot of soothing and appealing but bullshit arguments on blogs. Without having a deep understanding myself I will never be able to separate the chaff from the wheat. To make the best decisions I and everyone else who did not study this field in depth have no other rational choice but to choose the literature as the authority.

    Looking at the spaghetti graphs I do think that the many independent reconstructions do agree pretty well with each other in context of the anomalous 20th century warming. None shows the fast recent warming or the amplitude expected this century. I’m interested in why you think this is meaningless, even when leaving out all dendro proxies which you criticize.

    These points above show why you need to put your critiques in the literature and have the paleo community as a whole agree:
    A) that they’re valid.
    B) that they invalidate the conclusions in the context of the current warming.
    It is the only way scientific progress is made.

  75. cynicus Says:

    As cRR indicates both conditions A and B in my previous post are not met as your critique on dendro proxies does not apply to the non-dendro proxies used in Marcott et al.

    Or so to condense my previous post to two sentences:

    There are a lot of people claiming to be a Galileo overthrowing an established scientific field, and few fields have seen more Galileo claims than climate science. The problem for them is: history shows only very few actually are.

  76. Jim Bouldin Says:

    Don’t waste my time Kampen, whoever you may anonymously be, I have zero patience for the playing of games. The central point here is that you have to look very carefully at these kinds of studies, to make sure you’ve got sound analytical procedures applied to sound data. That’s the essence of science. I doubt if you know a single thing about the alkenone unsaturation or Mg/Ca ratios upon which the study relies.

  77. Jim Bouldin Says:

    Cynicus, I’m sympathetic to part of your argument (at 16:30); you state it clearly and I’ve seen others state essentially the same thing, which boils down to: “I have to trust what the “mainstream” of science says on these issues, because they’re paid to get it right and I’m not”. Which is exactly why trust problems arise when those scientists do **not** in fact have it right and fail to recognize or admit it, and correct it.

    At the same time, when somebody does a lot of work to bring complex and unfamiliar issues down to a general public level of understanding, including turnkey code and lengthy descriptions, and that information is ignored, then the onus of irresponsibility switches directly to the buyer, not the seller. Anyone is more than welcome to ask any questions they like at my blog; the reality is only a small handful ever have. I’m pretty sure I know why that is too.

    Make no mistake about it, there is an element out there that will not admit any fundamentally critical mistakes in science, for various possible reasons. These people create *lots* of problems, and as long as they keep that attitude, they will have a fight on their hands from some of us.

    As for the “yeah sure, everyone’s a Galileo aren’t they” putdown, I’m willing to ignore that for the time being.

  78. Jim Bouldin Says:

    “Looking at the spaghetti graphs I do think that the many independent reconstructions do agree pretty well with each other in context of the anomalous 20th century warming. None shows the fast recent warming or the amplitude expected this century.”

    You cannot possibly be serious. You just got done telling me you have to trust the scientists on these issues because you don’t have the expertise. In the light of that, and the data presented in the fourth graph above, or in any graph of last millennium reconstructions based on tree rings, how can you possibly say such a thing with a straight face?

    The answer is that you’ve heard exactly that from certain scientists pushing that viewpoint, and had it echoed ad nauseam at various places.

  79. cynicus Says:

    Jim, I don’t have a problem with you sniping at the paleo community or me for referring to Galileo. You’ll need all your sense of being right to try to further your ideas, as do the ones you’re disagreeing with.

    I do have a problem with you accusing them of holding on to their ideas because they are paid to get it right. You’re paid to get it right too, I surely hope! The two of you simply disagree, and that’s what the scientific literature is for: to have academics discuss their different opinions. Do you expect academics who think they are right to roll over just because you write on a blog that you are right and they are not? C’mon, that’s not how it works! You put your ideas in the literature and the others will beat the hell out of them. If they still stand afterwards you’ll be agreed with, and I promise you: even by me.

    Not before.

    I like to draw an analogy about you blaming me for not being willing to accept your claims: there are lots of people writing series of blog posts every day claiming to be right about e.g. intelligent design, plate tectonics or the Tychonic system. Do you study every one of these carefully before you disagree or dismiss them? Even if the author has put in a lot of effort to bring these complex issues to a layman level and you don’t have a lot of expert knowledge in that area? I bet you don’t; you resort to the mainstream consensus like almost everyone else and don’t spend much time on the issue. Am I right?

    Now, are your ideas so much different and more interesting that everyone should come over and delve deep into the material and ask you questions? To you, clearly: yes. You’re emotional about it, as are many about this topic. But it’s not so to me. Please allow me to be blunt: all I see is yet another guy who thinks he’s right and everyone else isn’t.

    Again, I am not in a position to judge whether you are right or not, only your peers are. You can explain to me very plainly how tree rings don’t respond to climate at all, and I might think I understand the ‘truth’ as you’ve written it, but I still am not able to judge whether you forgot something important, made a fundamental error in your assumptions, or whether the effects are compensated for or simply not very important, etc.

    That is why I wrote earlier: “there are a lot of soothing and appealing but bullshit arguments on blogs.” Your argument may sound logical and appealing to me, but I simply lack the knowledge and skills to properly evaluate it. So I have only one option: to resort to authority, which, I’m sorry, your (any) blog is not. That’s how it is; call me lazy, I don’t care. Laziness has little to do with it: a relevant education and years of experience are what I’m lacking. Perhaps you too.

    Godspeed in trying to get your ideas in the literature and have it stand over time, it’s the only way science advances.

  80. cynicus Says:

    “The answer you’ve heard exactly that from certain scientists pushing that viewpoint, and echoed ad nauseum at various places.”

    Yes, Jim, that’s how it works for a layman like me: I **have** to trust the consensus until the field is persuaded to change the consensus to something else. Then I **have** to trust the new consensus while knowing they were wrong before.

    Because I cannot possibly know any better!

    Now it’s up to you to try to persuade the field to change the consensus, vested interests or not. You probably know, like I do, that the best idea will win in the end, even if it means that some established academics and their ideas need to die out first to make it happen. But if it happens, when it happens, I’m with you.

    Jim, I’ve read a nice book called ‘A Short History of Nearly Everything’ and it relates to our discussion here. It shows painfully clearly that rarely has the scientific consensus been right from the start on *any* topic. But it also shows that only very rarely is an established field overturned; almost always the changes in knowledge are incremental and forward.

    Am I correct that you’re arguing here that all paleo temperature reconstructions are worthless, despite the different independent types of proxies, the methods, and the general agreement among the proxies? Yes, I see that individual proxies in the spaghetti show quite different behavior because of e.g. strong regional climate differences, but the big picture from the spaghetti graph constructed from a few ice-core, sediment and pollen proxies agrees nicely with the average from Marcott, who used a host of other proxies as well.

    This correlation may be baloney as you seem to claim, but so much correlation between so many different proxies can’t all be pure coincidence; there must be a large portion of sausage in there too!

    Anyway, I’m sure the developers of these different types of proxies are interested in whether theirs are subject to similarly large errors as the dendro proxies seem to have, and in how these errors would invalidate their work. Or in how they somehow all missed those problems you seem to be certain about. It would give them ample ammunition to get more funds to develop new proxies, as knowing the approximate temperatures of the past is still highly interesting.

  81. Eli Rabett Says:

    Cynicus and cRR are pointing out that there is substantial agreement between very different proxy sets. Now some, not Eli to be sure, think that Jim is arguing that there are no possible proxies.

  82. Jim Bouldin Says:

    Cynicus, do you have a serious learning disability or do you just prefer to twist people’s words so as to get attention? As already stated, I have no patience for game playing or spending time trying to defend myself w.r.t. your twisted interpretations.

    If, as you stated before, you don’t have the ability to understand the material at the level needed to discriminate between right and wrong, then just stay out of the discussions until you do. I don’t need anybody telling me what is and is not a valid way of recognizing valid and invalid concepts in science, least of all a non-scientist who doesn’t understand the issues and admittedly won’t even try.

    Kapish?

  83. Eli Rabett Says:

    Jim, how about you stop trying to bigfoot every bunny? Cynicus is right: knowing everything about everything is impossible unless your name is John von Neumann, so the best way of handling the situation is to trust the experts and try to learn the basics from them. On occasion one of them gets a wild hare up his ass, so mostly you ignore him until he brings the others around.

    And just to save you some time, your mother wears army boots too.

  84. Jim Bouldin Says:

    Eli, how about you spend some time trying to understand the science issues involved instead of making smartass comments to those who have?

  85. cynicus Says:

    Hey Jim, thanks for being such a cheerful bunny and keeping your posts civil when faced with my overwhelming ignorance. I sure hope you won’t receive from your academic peers what you give to others. Perhaps they’re sure they won’t need anybody to tell them what’s valid and what’s not too?

    Anyway, good luck with your quest for recognition!

  86. Nick Stokes Says:

    Jim,
    If you want to transfer your tree-ring worries to the Marcott study, you need to spell out how they apply. You referred to your first blog post, which listed three issues:

    “(1) ring width, being the result of a biological growth process, almost certainly responds in a unimodal way to temperature (i.e. gradually rising, then rather abruptly falling), and therefore predicting temperature from ring width cannot, by mathematical definition, give a unique solution,”
    I don’t believe these predominantly marine or aquatic micro-organisms and chemical species behave in complex ways with T – they are not generally under environmental stress. The relation usually used between UK37 and T, for example, is linear (see the sketch after this comment).

    “(2) the methods used to account for, and remove (“detrend”) that part of the long term trend in ring widths due to changes in tree age/size are ad-hoc curve fitting procedures that cannot reliably discriminate such trends from actual climatic trends,”
    AFAIK, the proxies used by Marcott don’t have to be separated from confounding variables by any such step.

    “(3) the methods and metrics used in many studies to calibrate and validate the relationship between temperature and ring response during the instrumental record period, are also frequently faulty.”

    For these proxies, no such calibration or verification is required.
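
    A minimal sketch of such a linear UK’37 calibration, assuming the Müller et al. (1998) global core-top coefficients (UK’37 = 0.033·SST + 0.044); the sample values below are made up for illustration:

    # Linear alkenone (UK'37) paleothermometry, as referenced above.
    # Coefficients assumed from the Mueller et al. (1998) global core-top
    # calibration; the down-core UK'37 values are purely illustrative.

    def uk37_to_sst(uk37: float) -> float:
        """Invert UK'37 = 0.033 * SST + 0.044 to estimate SST in deg C."""
        return (uk37 - 0.044) / 0.033

    for uk37 in (0.35, 0.60, 0.85):  # hypothetical measurements
        print(f"UK'37 = {uk37:.2f} -> SST ~ {uk37_to_sst(uk37):.1f} deg C")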

  87. Jim Bouldin Says:

    Cynicus, you’d make a good academic, they much prefer their self-defined version of “civility” over actually getting things, you know, like right or whatever.

    Nick, I appreciate you being the sole person actually addressing my original point. However, you’re wrong on points 1 and 3, and even with point 2 there can in fact be an algal growth rate effect on alkenone unsaturation ratios. The alkenone and Mg/Ca proxies both have potential confounding factors that have to be controlled for, including nutrient and light levels, species composition, and nonlinear responses, among other things, and *everything* in paleoclimate requires calibration and validation, so I don’t know where you get that idea. There’s a large literature on all of this, a lot of uncertainty in these things, and if I had the time I’d write a piece on it. Which was my whole point to begin with, before certain people felt the need to deflect the conversation elsewhere.

    Not to mention that the study has zero attribution elements to it and, combined with the low temporal resolution of the proxies used, therefore cannot be related in any way to the issue of current CO2 effects on climate. But that ain’t stopping people from implying or stating that it does, of course; there’s a contingent of people who will glom on to *anything* that supports the so-called “hockey stick”, regardless of how it was derived.

    As I said before, holes big enough to drive a train through.

  88. Nick Stokes Says:

    Jim,
    ” so I don’t know where you get that idea”
    On 3, you said:
    “the methods and metrics used in many studies to calibrate and validate the relationship between temperature and ring response during the instrumental record period,”

    These proxies do not calibrate and validate the relationship between temperature and ring response during the instrumental record period. They do not require an overlap at all; many don’t have one. I just cannot see any applicability of (3) at all.

  89. Bob Says:

    Jim Bouldin, you make some very cogent points in your responses. I also read part 1 of your blog and have to say well done. Kapish? or capisce.

  90. JV Says:

    The official recent temperature records for estimating global temperatures (IPCC) have had an interesting history. Note that this article states that the official recording-station count has dropped off markedly since the 1980s:
    http://www.appinsys.com/globalwarming/GW_Part2_GlobalTempMeasure.htm

    Most GHCN stations are in the U.S.

    Urban stations are not included in the measurements. I leave it to the reader to estimate the world urban land area.
    http://www.demographia.com/db-intlualand.htm

  91. JV Says:

    http://www.scientificamerican.com/article.cfm?id=cities-may-triple-in-size-by-2030

  92. Eric Says:

    I bet $50 that Dr. Bouldin does not get an interview with Andrew Revkin or a cover of Science.

  93. Bob Brand Says:

    Eric: but neither do you or I, so that would be a rather safe bet.

    By the way, Marcott 2013 did not get on the cover of Science either. There is a photo of a very interesting chip – a Silicon ion trap – on the cover:

    http://www.sciencemag.org/content/339/6124.cover-expansion

  94. Eric Says:

    I take it back; given the way that Revkin was used and now abused on this paper by the paleos, he might be angry enough to offer Dr. Bouldin some space.

    Now that would be science journalism worth reading. I suggest we would all be better off if Science would do the same.

  95. Jos Hagelaars Says:

    Anyone who has questions or remarks about the scientific details – like the uncertainties, robustness, different algorithms or calibration methods used in the Marcott et al paper – can pose them on RealClimate and get answers from top scientists (see also the update at the end of the post).

    http://www.realclimate.org/index.php/archives/2013/03/response-by-marcott-et-al/

    The view of the RealClimate team on the Marcott et al paper is quite clear:
    “Our view is that the results of the paper will stand the test of time, particularly regarding the small global temperature variations in the Holocene.”

    Also the following statement by Tamino is interesting:
    “In my opinion, the Marcott et al. reconstruction absolutely rules out any global temperature increase or decrease of similar magnitude with the rapidity we’ve witnessed over the last 100 years. And the fact is, we already know what happened in the 20th century”.

    And this statement of Ray Pierrehumbert about the big picture:
    “Unless I misunderstand the authors’ remark about time scale, the Marcott et al paper doesn’t tell us much about the centennial variability in the Holocene. It does put the CO2-induced warming of the present into a valuable perspective, though, because we know that the CO2 induced warming will last for millennia, and that the millennial warming we will get even if we hold the line at a trillion tons of carbon emissions is vast in comparison with the millennial temperature variations over the whole span of human civilization. That’s the real “big picture” here.”
    I could not agree more.

  96. Paul Price Says:

    Jos,

    I found your Figure 1 so compelling that I added temperature bands and a simplistic emissions outlook, see here: https://docs.google.com/file/d/0B5NgIqKD_aX4RWF1MGZ4YjhDVzQ/edit

    Obviously this is intended purely as policy messaging, from the science as currently understood, and not as a comment on or development of the science. My interest is in how the latest science and your presentation combining the graphs (perhaps adapted in some way, as I have done) can inform policy.

    Paul

  97. Jos Hagelaars Says:

    Hi Paul,
    Nice graphic! I have no problem with the usage of my graph by others, as it is based on public data.
    It is quite clear that humans are heading into unknown territory and we had better do something about that. That last part has again proved very difficult, as shown by all the wrangling regarding the interesting study of Marcott et al.

  98. Paul Price Says:

    Jos,
    Thanks very much! Your presentation is really excellent in showing Marcott in context, I am just playing with extending its messaging. I wrote a blog post crediting you of course, at climie.blogspot.ie

    As you say making it abundantly, or even embarrassingly, clear that action is needed is the aim (properly based on the science of course).

    I would like to add tCO2 on the right side of the graph https://docs.google.com/file/d/0B5NgIqKD_aX4RWF1MGZ4YjhDVzQ/edit to give an idea of the tCO2 per added ºC of warming. If anyone can weigh in on that I would appreciate papers. I have a couple of refs but any others welcome.

    The ‘wrangling’ is hardly even that, is it? Pielke is reduced to complaining about the press release, and Revkin is complaining about the RealClimate Q&A being released on a Sunday. Pathetic really!

  99. Jos Hagelaars Says:

    Paul,

    Maybe these EPICA Dome C CO2 data from the NOAA Paleoclimatology site could be useful to you:
    ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc-co2.txt
    The Shakun et al article (as referenced in my post) mentions something about placing these EPICA Dome C data on a more accurate timescale by Lemieux-Dudon 2010, but I haven’t checked that out.
    There is a small increase in CO2 concentration after 7000 BP while the global temperatures were slowly going down as shown by Marcott et al. The Milankovitch parameters do have an influence on long-term temperature changes too.

    For the modern record you could use the data as made available by Nasa Giss:
    http://data.giss.nasa.gov/modelforce/ghgases/Fig1A.ext.txt
    And the Siple Dome data:
    http://cdiac.ornl.gov/ftp/trends/co2/siple2.013

    Using these data you will be able to create a graph that resembles my graph in Figure 1. No coincidence of course. There is a difference: the ratio of the increase of CO2 in the modern age (~120 ppm) to the CO2 increase in the 20000 years before (~100 ppm) is larger than the ratio of the temperature changes regarding the same periods.

    You can find the IPCC 2007 scenario-CO2 data on the Dutch KNMI Climate Explorer site, under ‘Radiation’:
    http://climexp.knmi.nl/selectindex.cgi?id=someone@somewhere

    The RCP concentrations as used in the CMIP5 climate model calculations can be found here:
    http://www.pik-potsdam.de/~mmalte/rcps/

  100. Bob Brand Says:

    Hi Paul Price,

    .. and Revkin is complaining about the realclimate q&a being released on a Sunday.

    LOL

    That made my day. It shouldn’t get any sillier than that – just imagine, having to blog about it on a Sunday! Although it is probably only due to Andy Revkin hoping for a day off, finally, on Easter.

  101. Bart Verheggen Says:

    Jos wrote:

    “There is a difference: the ratio of the increase of CO2 in the modern age (~120 ppm) to the CO2 increase in the 20000 years before (~100 ppm) is larger than the ratio of the temperature changes regarding the same periods.”

    True. The reason is that in climbing out of the last ice age, CO2 forcing was about half the total (the other half being due to albedo change, where both were actually acting as a positive feedback on longer timescales). Right now, CO2 forcing is about equal to the net forcing (the other GHG and aerosols approximately cancelling each other – very rough numbers mind you). So the temp change per CO2 change was larger between last ice age and now than it is for the current (and future) warming.

  102. Paul Price Says:

    Jos,

    Thank-you very much indeed for taking the time to give the references, much appreciated, I will look at the refs.

    So far on CO2 emissions = ppmCO2 = Temperature for the graph I have:

    From Meinshausen et al Nature, 1000 GtCO2 = 1TtCO2 = 25% chance of over 2ºC assuming linear medium-term relation of final equilibrium temperature as in http://www.earth-syst-dynam.net/4/31/2013/esd-4-31-2013.pdf

    However, Matthews http://www.nature.com/nature/journal/v459/n7248/full/nature08047.html has a CCR of 1.0–2.1 °C per trillion tonnes of carbon (TtC). I am confused. Should this be TtCO2? Otherwise it does not correspond to Meinshausen?

    Just to give the very crude outlook for emissions this century, I have a 100 ppm rise corresponding to a 1 ºC rise in global average surface temperature. So 450 ppm = 2 ºC, 550 ppm = 3 ºC etc.

    I know this is ‘unscientific’ but for policy types we have to present the overview of the science outlook in general terms.

    Bob Brand: LOL is right!

  103. Paul Price Says:

    Bart:

    Thanks very much for your reply. As you can probably tell I am not ‘deep climate science aware’ enough to trust myself to deal with the raw data, though I will look at it.

    Communicating the likely implications of ongoing emissions right now is what I’m aiming at, the current and future warming. If you can give me an idea of the ranges that would be great.

    For example, if ECS = 3 ºC, that implies that for a 280 ppm CO2 increase over pre-industrial, every ~95 ppm increase roughly corresponds to 1 ºC of equilibrium temperature rise. If the Meinshausen (above) figure is used, then 1 TtCO2 for a 2 ºC rise (25% chance) implies 500 GtCO2 per 1 ºC.

    So: each 1ºC rise = about 100 ppm = 500 GtCO2

    You can see how rough I am being, but I think this is how policy-makers have to be approached if the idea of ‘safe’ emission paths and carbon budgets is to be conveyed with any impact.

    Paul

  104. Bart Verheggen Says:

    Paul,

    That’s not entirely true. The temperature effect of CO2 is logarithmic, i.e. each doubling of CO2 has the same effect (e.g. going from 280 to 560 ppm has the same T-effect as going from 560 to 1120 ppm). It is thus not correct to state that each 100 ppm corresponds to a 1 °C eventual (it takes time!) rise in T. The error in the linear assumption would be small though, if you only apply it over a small range.
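
    As a quick numerical illustration of that logarithmic behaviour – a minimal sketch using the simplified forcing expression dF = 5.35·ln(C/C0) from Myhre et al. (1998), which Jos quotes further down:

    import math

    # Simplified CO2 forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0), in W/m^2.
    def co2_forcing(c_ppm: float, c0_ppm: float) -> float:
        return 5.35 * math.log(c_ppm / c0_ppm)

    # Each doubling adds the same forcing:
    print(co2_forcing(560, 280))   # ~3.71 W/m^2 for 280 -> 560 ppm
    print(co2_forcing(1120, 560))  # ~3.71 W/m^2 for 560 -> 1120 ppm

    # Equal +100 ppm steps add progressively less forcing:
    print(co2_forcing(380, 280))   # ~1.63 W/m^2
    print(co2_forcing(480, 380))   # ~1.25 W/m^2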

  105. Paul Price Says:

    Bart,

    I am now very puzzled, because the references I have and the talks I have heard seemed to make it very clear that there is a linear relation of temperature to CO2 emissions. Please could you give me a reference or two for the logarithmic relation that you describe?

    In this paper by Raupach http://www.earth-syst-dynam.net/4/31/2013/esd-4-31-2013.pdf Fig 6 bottom right gives a very linear relation of Temperature to CO2.

    I received an email from a senior climate scientist here in Ireland pointing to Matthews above and to this reference by Stocker:

    Click to access stocker12scix.pdf

    It starts with this on the relationship:
    “Robust evidence from a range of climate–carbon cycle models shows that the maximum warming relative to pre-industrial times caused by the emissions of carbon dioxide is nearly proportional to the total amount of emitted anthropogenic carbon (1, 2). This proportionality is a reasonable approximation for simulations covering many emissions scenarios for the time frame 1750 to 2500 (1). This linear relationship is remarkable given the different complexities of the models and the wide range of emission scenarios considered.”

    These references point to the relationship being linear but I may have not asked you correctly or we may be talking about different things. Many thanks for the help in this.

    I understand that these are ‘eventual’ temperatures, but also that other non-linear effects may apply, potentially accelerating warming.

  106. Jos Hagelaars Says:

    Paul,

    When doing calculations with gigatonnes of carbon, you must keep in mind that 1 Gt carbon comes down to 1 * 44/12 = 3.7 Gt CO2 (the ratio of the molecular masses).
    Also from the weight of the atmosphere it can be calculated that 1 ppm CO2 relates to ~2.13 Gt carbon or 7.81 Gt CO2.

    From the abstract it follows that Matthews et al take the carbon cycle response into account: how much of the emitted CO2 is absorbed by the oceans or used for the growth of trees/plants as the temperature and CO2 concentration go up. But probably an estimate is also made of the amount of CO2 that will be released by the warming of the permafrost, et cetera.
    I can’t tell exactly how they reach their results: for every trillion tonnes of carbon emitted there is an increase of 1 to 2.1 °C, independent of the background CO2. Doing some math:
    1000 Gt carbon = 1000/2.13 = 470 ppm CO2. Assuming that about 50% is taken up by the ocean+land sink, the increase in CO2 is 235 ppm from 280 ppm in pre-industrial times, which comes down to ~515 ppm CO2 total.

    Meinshausen et al mentions 1000 Gt / 1440 Gt CO2 above the 2000 level, if I’m correct. This comes down to 50% of 128/184 ppm = 64/92 ppm CO2 above the 2000 level of 370 ppm, or ~434/462 ppm CO2 total. Roughly comparable to Matthews.
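
    For anyone who wants to redo this arithmetic, a minimal sketch using only the figures quoted above (44/12 for the mass ratio, ~2.13 GtC per ppm, and the assumed 50% airborne fraction):

    # Unit conversions quoted above: 1 GtC = 44/12 GtCO2; 1 ppm CO2 ~ 2.13 GtC.
    GTC_PER_PPM = 2.13
    CO2_PER_C = 44.0 / 12.0  # ~3.67, mass ratio of CO2 to C

    def ppm_added(gt_carbon: float, airborne_fraction: float = 0.5) -> float:
        """ppm CO2 remaining in the atmosphere for an emission in GtC."""
        return airborne_fraction * gt_carbon / GTC_PER_PPM

    # Matthews-style: 1000 GtC emitted on top of 280 ppm pre-industrial:
    print(280 + ppm_added(1000))              # ~515 ppm total

    # Meinshausen-style: 1000 GtCO2 above the 2000 level of 370 ppm:
    print(370 + ppm_added(1000 / CO2_PER_C))  # ~434 ppm total

    # CCR unit check: 1.0-2.1 degC per TtC is ~0.27-0.57 degC per TtCO2.
    print(1.0 / CO2_PER_C, 2.1 / CO2_PER_C)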

    The relationship between CO2 and the amount of energy (and therefore temperature) it adds to the atmosphere is logarithmic as Bart says. It is described by this formula for the change in forcing with reference to a starting CO2 concentration: dF = 5.35 * ln(CO2/CO2-start) in W/m2.
    A doubling in CO2 means CO2/CO2-start = 2, this comes down to 3.71 W/m2 per doubling of the CO2 concentration in the atmosphere. Taking all the feedbacks into account, which amplify or reduce the CO2 effect, you will end up with a rise in temperature of 2 – 4.5 °C per doubling of the CO2 concentration. This 2 – 4.5 °C is the equilibrium climate sensitivity and it will take a while until equilibrium is reached, centuries to millennia. See this RealClimate post: http://www.realclimate.org/index.php/archives/2013/01/on-sensitivity-part-i/
    Some references regarding the logarithmic relationship:
    http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/222.htm

    Click to access myhre_grl98.pdf

    If I’m correct, the linear relationship in e.g. Raupach between the cumulative total CO2 emissions and temperature is based on exponentially growing CO2 emissions that compensate for the logarithmic relationship between CO2 and temperature. The abstract says:
    “This implies that, if the carbon-climate system is idealised as a linear system (Lin) forced by exponentially growing CO2 emissions (Exp), then all ratios of responses to forcings are constant.”.
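
    That compensation is easy to see numerically. A rough sketch, assuming for simplicity that exponential emissions translate into an exponentially growing concentration (the 0.5%/yr growth rate is hypothetical):

    import math

    # If CO2 grows exponentially, C(t) = C0 * exp(k*t), then the logarithmic
    # forcing dF = 5.35 * ln(C/C0) = 5.35 * k * t grows linearly in time.
    C0, k = 280.0, 0.005            # assumed 0.5%/yr exponential growth
    for t in (0, 50, 100, 150):     # years
        c = C0 * math.exp(k * t)
        print(t, round(c, 1), round(5.35 * math.log(c / C0), 2))
    # The forcing rises in equal steps of ~1.34 W/m^2 per 50 years.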

    Hope this helps a little.

  107. Bart Verheggen Says:

    Temperature depends logarithmically on CO2 concentration (be sure to distinguish emissions from concentration. Esp for a long lived compound such as CO2 they can evolve very differently).

    This is a good on-line textbook that should help clarify the kinds of questions you have: http://www.climate.be/textbook/index.html

    The quote you give is about the total (i.e. integrated over time; cumulative) amount of CO2 emitted being proportional to the eventual T-rise. That is correct, and has been found by multiple authors (e.g. Nature’s Climate Crunch issue a few years ago, Allen et al, Meinshausen et al). If you talk about “emissions” as such, I interpret that to mean instantaneous emissions, and in that case it is not at all correct.

  108. Paul Price Says:

    Jos and Bart,

    Many thanks to both of you for all of your help, especially in resolving my lin/log confusion. Now I understand the basic compensation reason but I will try to get into reading the references you have kindly given me.

    Somehow messaging climate to citizens and policy makers needs to get better so by working on it together like this we can hopefully improve that communication.

    Here is a link to a new draft, https://docs.google.com/file/d/0B5NgIqKD_aX4RmxjUXFjYkQxaWM/edit?ups=drive_web
    As you can see it has taken a few hours (obviously it is intended as a Commons effort).

    Inevitably this poster/graphic is a balance between too much and too little information, especially as the intention is to convey the science to people who, like myself, are not fully aware of the detailed climate science.

    The presentation in this draft is ‘doomy’, but it seems to be what the science is saying in precautionary terms. If either of you have suggestions, corrections or thoughts, I would welcome them. For example, the temperature anomaly date range may be wrong on the left hand side of the chart. The warming dates are from Stocker, 2012 as above.

    Thanks again,

    Paul

  109. Paul Price Says:

    Graphic revised again,
    changed last line in “Ess. Facts” plus other minor stuff:
    https://docs.google.com/file/d/0B5NgIqKD_aX4aFdHZnRodVQtTVE/edit

  110. 21fossil Says:

    Paul Price,

    How does your graphic of the latest science inform policy?

  111. Paul Price Says:

    21Fossil

    I would hope that Jos Hagelaars’ graphic and my more wordy version speak for themselves, and you can go to the science for more.

    As to policy, my graphic in part tries to show that we are, right now, making critical choices that will affect the amount of greenhouse gases that will add to the accumulation of CO2 in the atmosphere that causes global warming. Certainly, the current path means there is a very high risk of extremely dangerous warming of 4ºC above pre-industrial temperatures, strongly indicating that radical emissions reduction is needed from now on, especially beginning immediately in wealthy countries.

    For more look at the research of Kevin Anderson and Alice Bows.

  112. Max Thabiso Edkins (@maxthabiso) Says:

    Thanks Jos Hagelaars. Paul, I really like your new graphic!

  113. Gregor Hagedorn Says:

    Can you share the data behind the graph, to make it easier to create a Wikipedia-enabled (open source) version of the graph, citing your work and the underlying data, of course?

  114. Jos Hagelaars Says:

    Hi Gregor,

    The data I used for constructing this “wheelchair” graph are publicly available.
    A1B multimodel global mean:
    http://www.ipcc-data.org/data/ar4_multimodel_globalmean_tas.txt
    HadCRUT4:
    https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html
    Marcott et al.:
    http://www.sciencemag.org/content/339/6124/1198/suppl/DC1
    Shakun et al.:
    https://www.nature.com/articles/nature10915#supplementary-information
    I used 1961-1990 as the reference period and shifted the Shakun et al. data manually by 0.25 °C because they didn’t supply a reference period but used only one year (1950) as the zero value.
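
    A minimal sketch of that kind of re-baselining (the arrays are hypothetical; the real series come from the links above):

    import numpy as np

    def rebaseline(years, temps, ref_start=1961, ref_end=1990):
        """Express anomalies relative to the mean over a reference period."""
        ref = (years >= ref_start) & (years <= ref_end)
        return temps - temps[ref].mean()

    # Hypothetical annual anomaly series covering 1900-2000:
    years = np.arange(1900, 2001)
    temps = np.random.normal(size=years.size)  # placeholder anomalies
    anoms = rebaseline(years, temps)           # zero-mean over 1961-1990

    # A series that does not overlap 1961-1990 (like Shakun et al.) can only
    # be aligned by a manual offset, e.g. the 0.25 degC shift mentioned above.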

    In the supplementary information of Clark et al. 2016 you can find a nice dataset of the Shakun and Marcott data joined together with the same reference period 1961-1990:
    https://www.nature.com/articles/nclimate2923#supplementary-information

    In Schellnhuber et al. 2016 you can find a similar graph as the wheelchair graph, their figure 1. Maybe they used my idea 😊.
    http://www.nature.com/nclimate/journal/v6/n7/full/nclimate3013.html

    Hope this helps,
    Jos

  115. Gregor Hagedorn Says:

    Thanks a lot Jos! –Gregor

  116. Paul Hirth Says:

    Excellent “wheelchair” graph. The term “Present” is used to mean a variety of years. For example, for Greenland ice cores it usually means 1950, with graphs ending at -95 BP. In your wheelchair graph, what is the last year graphed for the blue Marcott et al temperatures?

  117. Jos Hagelaars Says:

    Paul,

    The last year in the blue Marcott et al. temperatures is 10 BP, which is the year 1940 AD.
    The x-range of the graph is in BC / AD.
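
    For reference, a one-line sketch of the convention involved (“before present” conventionally counted from 1950):

    def bp_to_ad(bp: int) -> int:
        """Convert years BP (before 1950 by convention) to a calendar year AD."""
        return 1950 - bp

    print(bp_to_ad(10))  # 1940, the last Marcott et al. data point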
