Tropospheric hot spot

The current topic under discussion at ClimateDialogue is the tropospheric hot spot: Is it there, and if not, so what? Invited discussants are Steven Sherwood of the University of New South Wales in Sydney, Carl Mears of Remote Sensing Systems (who works on the RSS satellite product) and John Christy of the University of Alabama in Huntsville (who works on the UAH satellite product).

I’ll provide a short overview here (loosely based on the intro over at CD), interspersed with my own and other people’s commentary.

Based on theoretical considerations and simulations with General Circulation Models (GCMs), it is expected that any warming at the surface will be amplified in the upper troposphere. The reason is as follows: more warming at the surface means more evaporation and more convection. Higher in the troposphere the (extra) water vapour condenses and latent heat is released. Calculations with GCMs show that the lower troposphere warms about 1.2 times faster than the surface. For the tropics, where most of the moisture is, the amplification factor is larger, about 1.4.
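
To make the amplification argument concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not part of the ClimateDialogue material): it lifts a saturated air parcel along the moist adiabat from two surface temperatures that differ by 1 K and compares the resulting temperatures near 300 hPa. The constants and the Bolton approximation for saturation vapour pressure are standard textbook values, and the integration is deliberately crude.

    import numpy as np

    # Physical constants (SI units)
    g, Rd, cpd = 9.81, 287.0, 1004.0      # gravity, dry-air gas constant, dry-air heat capacity
    Lv, eps = 2.5e6, 0.622                # latent heat of vaporisation, Rd/Rv

    def r_sat(T, p):
        """Saturation mixing ratio (kg/kg) at temperature T (K) and pressure p (Pa),
        using Bolton's approximation for the saturation vapour pressure."""
        es = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))
        return eps * es / (p - es)

    def moist_lapse_rate(T, p):
        """Saturated (pseudo-)adiabatic lapse rate -dT/dz in K/m."""
        rs = r_sat(T, p)
        return g * (1.0 + Lv * rs / (Rd * T)) / (cpd + Lv**2 * rs * eps / (Rd * T**2))

    def temperature_aloft(T_sfc, p_sfc=1000e2, p_top=300e2, dp=-100.0):
        """Lift a saturated parcel from p_sfc to p_top along the moist adiabat."""
        T, p = T_sfc, p_sfc
        while p > p_top:
            # dT/dp = Gamma_m / (rho * g), with rho = p / (Rd * T) as a dry-air approximation
            T += moist_lapse_rate(T, p) * Rd * T / (p * g) * dp
            p += dp
        return T

    T1 = temperature_aloft(300.0)   # tropical-ish surface temperature of 300 K
    T2 = temperature_aloft(301.0)   # the same parcel, but 1 K warmer at the surface
    print("Warming at 300 hPa per 1 K of surface warming: %.2f K" % (T2 - T1))

The printed number is the amplification factor at that level, and it comes out well above 1: along the moist adiabat a 1 K surface warming translates into considerably more warming aloft, which is the essence of the expected hot spot.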

This means that, contrary to what some people claim, the hot spot is not specific to the enhanced greenhouse effect: any surface warming (or cooling) would be expected to be magnified higher aloft, at least in the tropics. Lindzen puts it as follows:

We know that the models are correct in this respect since the hot spot is simply a consequence of the fact that tropical temperatures approximately follow what is known as the moist adiabat. This is simply a consequence of the dominant role of moist convection in the tropics.

In the public comments Ross McKitrick confirms that

amplification would [also] be observed in response to increased solar forcing.

This change in the thermal structure of the troposphere is known as the lapse rate feedback. It is a negative feedback, i.e. it attenuates the surface temperature response to whatever forcing is at work, since the additional condensation heat released in the upper air results in more radiative heat loss to space.

Jos Hagelaars explained it as follows in his post Klotzbach revisited:

When the earth gets warmer, air can contain more water vapor. This also has an impact on the lapse rate, since more water vapor means more heat transfer to higher altitudes. This effect on the lapse rate is called the lapse rate feedback. More heat at higher altitudes implies that there will be more emission of infrared light, a negative feedback. This effect is particularly important in the tropics.

The (negative) lapse rate feedback is tightly coupled to the (positive) water vapor feedback: The uncertainty (or at least the model spread) in their combined effect (thought to be a positive feedback) is less than the uncertainty in each of them individually (see e.g. AR4 fig 8.14).
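
The compensation is easy to illustrate with a toy calculation (the numbers below are made up purely to mimic that AR4-style picture, not actual model output): if each model's lapse rate feedback is strongly anticorrelated with its water vapour feedback, the spread of the sum is much smaller than the spread of either term on its own.

    import numpy as np

    rng = np.random.default_rng(0)
    n_models = 20

    # Hypothetical per-model feedbacks in W m^-2 K^-1, chosen for illustration only:
    # a positive water vapour feedback and a negative, anticorrelated lapse rate feedback.
    wv = rng.normal(1.8, 0.18, n_models)
    lr = -0.85 - 0.9 * (wv - 1.8) + rng.normal(0.0, 0.05, n_models)

    print("spread of WV feedback     :", round(wv.std(ddof=1), 2))
    print("spread of LR feedback     :", round(lr.std(ddof=1), 2))
    print("spread of WV + LR combined:", round((wv + lr).std(ddof=1), 2))  # much smaller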

Some (but not all) observations indicate less amplification of warming trends aloft than models predict (based on the relatively undisputed physics noted above). The question is whether this is a significant difference, taking into account the various uncertainties, and if so, what is the cause and what are the implications? ThingsBreak had a good overview of the hot spot discussion on SkS.

The ‘official’ questions we posed for the invited experts were as follows:

1) Do the discussants agree that amplified warming in the tropical troposphere is expected?

2) Can the hot spot in the tropics be regarded as a fingerprint of greenhouse warming?

3) Is there a significant difference between modelled and observed amplification of surface trends in the tropical troposphere (as diagnosed by e.g. the scaling ratio; a short illustration of this diagnostic follows the list of questions)?

4) What could explain the relatively large difference in tropical trends between the UAH and the RSS dataset?

5) What explanation(s) do you favour regarding the apparent discrepancy surrounding the tropical hot spot? A few options come to mind: a) satellite data show too little warming; b) surface data show too much warming; c) within the uncertainties of both there is no significant discrepancy; d) the theory (of moist convection leading to more tropospheric than surface warming) overestimates the magnitude of the hot spot.

6) What consequences, if any, would your explanation have for our estimate of the lapse rate feedback, water vapour feedback and climate sensitivity?
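
Regarding the scaling ratio mentioned in question 3: one common way to diagnose amplification is to take the ratio of the least-squares trend of a tropical tropospheric temperature series to the trend of the corresponding surface series (regressing the tropospheric anomalies on the surface anomalies is a related alternative). The sketch below uses synthetic anomalies with made-up trend and noise values, purely to show the mechanics of the diagnostic; it is not real data.

    import numpy as np

    rng = np.random.default_rng(42)
    months = np.arange(35 * 12)                       # 35 years of monthly anomalies

    # Synthetic series: trends of ~0.12 and ~0.17 K/decade plus noise (illustrative only)
    surface = 0.12 / 120.0 * months + rng.normal(0.0, 0.15, months.size)
    tropo   = 0.17 / 120.0 * months + rng.normal(0.0, 0.25, months.size)

    def ols_trend(y, t):
        """Ordinary least-squares linear trend of y against t (per unit of t)."""
        return np.polyfit(t, y, 1)[0]

    trend_sfc = ols_trend(surface, months) * 120.0    # K/decade
    trend_trp = ols_trend(tropo, months) * 120.0      # K/decade
    print("surface trend     : %.3f K/decade" % trend_sfc)
    print("tropospheric trend: %.3f K/decade" % trend_trp)
    print("scaling ratio     : %.2f" % (trend_trp / trend_sfc))

With these made-up numbers the ratio comes out near the ~1.4 amplification expected for the tropics; with real observations the interesting question is how far the diagnosed ratio deviates from that expectation, and whether the deviation is significant given the noise.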

We’re first trying to get clarity on Q1 and Q2. For Mears and Sherwood the answers would probably be “yes” and “no”, respectively. Whether Christy agrees is less clear. He has frequently referred to the hot spot as a model-predicted consequence of the enhanced greenhouse effect, giving the impression that he regards it as a fingerprint specific to a greenhouse mechanism. Lindzen and McKitrick paved the “physical” way for him, though.

We can then move on to Q3, on which there seems to be ample disagreement, and finally to the “so what” questions 5 and 6.

Some impressions from the discussants’ replies to each other:

Steven Sherwood:

We [Carl Mears and I] agree that the data we have are basically not stable enough over time to distinguish whether a “hot spot” exists or not, or is as prominent as we would expect. […] As to John Christy’s post, I don’t really think he’s being forthright about the uncertainties in the data. His Fig. 1 does not identify what datasets are actually being used, but Carl’s own plots show that the results depend on this. Also, the plot states that the model calculations he compares to are based on scenario “RCP8.5,” but that is a high-emissions future scenario so I am puzzled by why he is doing this — there are historical simulations in CMIP5 that are meant for comparing with observations.

John Christy argues that the observational datasets which do agree reasonably well with model simulations are prone to warm biases. He does not address the uncertainties or biases in those datasets which diverge from the model simulations (as also noted by Sherwood in the comment quoted above):

STARv2.0 contains a spurious warming shift on 1 Jan 2001 which will be corrected in the new v3.0 to be released later this year. So, STAR’s results in Mears’s contribution overstate the warming. MERRA is an outlier dataset for temperature trends with problems, including a significant warm shift between 1990 and 1992 probably due to the inability to correct for infrared contamination from Mt. Pinatubo. As a result, MERRA produces the warmest tropospheric trend by far of all observational and reanalysis datasets, being almost +0.10 °C/decade warmer than the average of the balloons. (…) It is understandable that Mears highlights RSS data since he is the source of the product, but evidence has been published to demonstrate the RSS TMT product likely has spurious tropical warming due to an apparent overcorrection of the diurnal cycle errors (e.g. Christy et al. 2010). (…) Thus there are clear reasons for not highlighting RSS, MERRA or STAR as observational datasets.

Christy also expresses some disdain for models in general and an odd (and skewed) view of uncertainty, as if the presence of uncertainty should preclude any policymaking and as if uncertainty could only cut one way. This was addressed by Carl Mears (and before him by Chris Colose):

I think all three of us agree that the observed temperature changes in the tropics (and globally) are less than predicted over the last 35 years. John uses this fact to argue that there are fundamental flaws in all climate models, and that their results should be excluded from influencing policy decisions. This goes much too far. First, many imperfect models are used to inform policy makers in many areas, including models of the economy, population growth, environmental toxins, new medicines, traffic flow, etc. etc. As pointed out by a commenter in this thread, policy makers are used to dealing with uncertain predictions. If we throw out all imperfect models, we will be reduced to consulting the pattern of tea leaves on the bottom of our cups to make decisions about the future. Second, as I argue below, there are many possible reasons for this discrepancy, and only a few substantially influence the long-term predictions.

Chris Colose wrote in the public comments (addressing the “so what” question):

Steve Sherwood correctly concludes that there is no obvious connection between a tropical hotspot and climate sensitivity. In fact, because the greenhouse effect depends on the temperature difference between the surface and layers aloft, lack of upper-level amplification could actually mean a slightly higher climate sensitivity, since the lack of enhanced infrared emission aloft (with no hotspot) would be compensated for by higher temperatures lower down to restore planetary energy balance. This would be a small effect though, and somewhat counteracted by a weaker water vapor feedback.

To be continued…


13 Responses to “Tropospheric hot spot”

  1. Arthur Smith Says:

    There doesn’t seem to be a lot of interest in this one – odd, because I think John Christy’s and Ross McKitrick’s claims regarding the “average” trends (when what they are averaging differs by a factor of about 3?!) seem deeply unscientific and, frankly, should be damning to Christy’s reputation (McKitrick doesn’t exactly have one).

  2. Bart Verheggen Says:

    Arthur,

    Taking into account that we got some big names participating in this discussion, I’d have expected a bit more interest as well.

    Perhaps you’d like to elaborate on what’s wrong with Christy’s argument, preferably via a comment at CD.

  3. Arthur Smith Says:

    I’ve already commented at CD; this really ought to come from somebody else. Maybe we can get tamino to comment.

    The bald-faced assertion that Christy made is “to take a more unbiased approach to the observations, I had simply calculated the mean of the two categories of datasets (satellite and balloon separately) to reduce the random error opportunities. In this way the impact of independent errors that lead to trends that are too warm or too cool may be limited. The fact the tropospheric trends from the average of two very different and independent set of monitoring systems, i.e. balloons and satellites, are within 0.01 °C/decade of each other, lends confidence to the result.”

    McKitrick asserts similarly: “Since the observational record involves two different systems (radiosondes and MSU) and the average balloon record does not differ from the average MSU series (MMH Table III), the credibility of the observed record merits serious consideration.”

    But look at Mears’ figure 2 in his intro. What Christy and McKitrick are claiming is that you can trust the observations because the average of the 3 triangles is almost the same as the average of the red square and the black square (and of course don’t include the blue square (STAR)). These assertions render me near speechless!

    I mean, reading off Mears’ graph roughly we have three numbers (0.03, 0.11, 0.13) being compared with two other numbers (0.07, 0.14) that ALL should be roughly independent observations of the same thing (don’t even start on their throwing out the one high one, STAR at 0.17) and essentially asserting that the real observed value is very close to 0.1 because the average of the first three (0.09) is close to the average of the second (0.11).

    Any reasonable analysis would take all 6 “observational” numbers and figure out the standard deviation as at least a rough measure of likely observational error – I get an average of 0.11, plus or minus 0.10 for a 90% confidence interval. So the observations are clearly completely incapable of excluding the 0.20 expected (from Mears’ straight-line curve) based on theory at anything like a reasonable significance level. Asserting otherwise is so misleading, well, again, I feel speechless.
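
    (Here is that arithmetic as a short script, for anyone who wants to check it; the six trend values are just my rough readings off Mears’ figure 2, and the plus-or-minus 0.10 comes out if you treat the six estimates as independent draws from the same distribution and use a Student-t interval:)

        import numpy as np

        # Approximate trends in K/decade read off Mears' figure 2 (sondes and satellites)
        trends = np.array([0.03, 0.11, 0.13, 0.07, 0.14, 0.17])

        mean = trends.mean()
        s = trends.std(ddof=1)          # sample standard deviation
        t90 = 2.015                     # Student-t 0.95 quantile for 5 degrees of freedom
        print("mean %.2f, std %.2f, ~90%% interval +/- %.2f" % (mean, s, t90 * s))
        # -> mean 0.11, std 0.05, ~90% interval +/- 0.10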

  4. Bart Verheggen Says:

    Thanks Arthur. That’s helpful. I’ll bring it up in due time.

  5. Arthur Smith Says:

    Marcel Crok’s latest comment there is very reasonable, focusing on the specific trend and uncertainty numbers that people have talked about indirectly but not been very explicit about. I hope we get real responses from everybody!

  6. Bart Verheggen Says:

    Steven Sherwood makes the point very well:

    John Christy reasons that if the means of two different subsets of the data are roughly the same, then we know the answer, even if there is large scatter within each subset. That is very interesting: according to that reasoning there is no longer any doubt about equilibrium climate sensitivity, because the average of the models and of various estimates based on past data are each around 3C (e.g., IPCC 2007).

    As for his reasons for rejecting various datasets, they seem like subjective, a posteriori rationalisations. Every dataset shows rapid changes somewhere or another which look like they could be artificial, or has some design limitation. Is there any peer-reviewed paper using objective criteria to show that the datasets John rejects are truly worse than the others? My 2008 paper showed that the warming trends from the UAH version of MSU TMT at the time were significantly smaller than those from radiosonde data, in a fairly consistent manner across different parts of the globe, while the other two analyses available at the time were consistent with the sondes (please compare the comprehensive global approach in that paper with the pick-and-choose methods one sometimes sees). This was never either refuted or acknowledged by John who continues to maintain that his products are the ones to believe.

  7. Paul S Says:

    Hi Bart,

    The log in page at Climate Dialogue doesn’t seem to be working at the moment: I get a 404 error. Do you know anything about that?

  8. Bart Verheggen Says:

    Seems to be working fine for me; is it still the case on your end?

  9. Paul S Says:

    Still not doing anything in IE10.

    In Firefox a password box pops up, which I haven’t seen previously, and instructs to enter ‘antagonist’ as username and password. Doing that takes me to the normal WordPress log in page and I can get in ok.

  10. Bart Verheggen Says:

    Hi Paul,

    We’ll have the computer guys look into it. Thanks for the heads up.

  11. Bart Verheggen Says:

    Paul,

    Apparently some registered users have been inadvertently deleted when cleaning up spam. If that’s the cause, then re-registering would be the only solution. Our apologies for the inconvenience.

  12. Paul S Says:

    Bart,

    My original username and password work fine when I can get to the log in screen, but I couldn’t get there in IE10.

    I’ve found the problem now: my browser security protected mode was checked which was apparently preventing that initial pop-up box from opening. It’s switched off now and I can get through.

    Is that pop-up box asking you to enter ‘antagonist’ really necessary? Protected mode “on” is the default in IE10 so I would think more people might be getting blocked.

  13. Bart Verheggen Says:

    I think they increased the security settings because of the amount of spam. Glad you got in though.
