
2015 Global Temperatures


nflwxman

Recommended Posts

Well if you are focusing on TMT data...sure...maybe. I was talking about the TLT data vs the surface.

 

I'll disagree with the climate sensitivity claim as it's been coming down some recently...especially using empirical data plus the IPCC's own best estimates on energy budget. Then there is the whole debate of TCR vs ECS and which is more practical. But all of that is probably a topic for a different thread.

 

What exactly are you disagreeing with? Are you saying the middle range of the ECS is 1.5C? Maybe the middle of the range is now a bit less than 3C, but it's nowhere near 1.5C, which is what would be dictated if UAH were actually perfectly accurate.

 

Tropical TMT is the most important when diagnosing tropospheric amplification and the water vapor feedback, but TLT is still quite important too. If the TLT trend is actually .14C/decade, while the surface trend is .16C/decade, that's the opposite of the expected amplification and would imply a water vapor feedback much smaller than expected.
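As a back-of-envelope illustration of that amplification point, the sketch below uses the trend numbers quoted above; the expected ~1.1 ratio in the comment is a rough theoretical figure, not something from this thread:

```python
# Illustrative check of tropospheric amplification using the trend
# values quoted above (C/decade).
def amplification_ratio(tlt_trend, sfc_trend):
    """Ratio of lower-troposphere trend to surface trend.

    Moist-adiabatic theory expects this ratio to exceed 1 (roughly
    1.1-1.2 for the global mean); a ratio below 1 is the opposite
    of the expected amplification.
    """
    return tlt_trend / sfc_trend

ratio = amplification_ratio(0.14, 0.16)
print(f"TLT/surface trend ratio: {ratio:.2f}")  # below 1 -> no amplification
```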


What exactly are you disagreeing with? Are you saying the middle range of the ECS is 1.5C? Maybe the middle of the range is now a bit less than 3C, but it's nowhere near 1.5C, which is what would be dictated if UAH were actually perfectly accurate.

 

Tropical TMT is the most important when diagnosing tropospheric amplification and the water vapor feedback, but TLT is still quite important too. If the TLT trend is actually .14C/decade, while the surface trend is .16C/decade, that's the opposite of the expected amplification and would imply a water vapor feedback much smaller than expected.

 

ECS of 3.0C. Most empirical-based studies are trending lower than that now. The actual ECS is probably less important than TCR though...but I understand that there are still plenty of papers that argue higher...so it's a debate that is strong in the literature.

 

The difference of 0.02C between the two trends (TLT vs sfc) is within both of their margins for error...it's pretty meaningless statistically. So I see no real point in wasting so much energy debating it, but to each his own. If the difference was much larger, then it would be more interesting.
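That margin-of-error argument is easy to sketch: combine the two trend uncertainties in quadrature and compare to the difference. The uncertainty values below are placeholders (0.05C/decade for the satellite trend echoes the Spencer et al figure quoted later in the thread; 0.02 for the surface is purely an assumption):

```python
import math

def trend_difference_significant(t1, t2, sigma1, sigma2, n_sigma=2.0):
    """Crude test: is the trend difference larger than n_sigma times
    the combined (quadrature) uncertainty of the two trends?"""
    diff = abs(t1 - t2)
    combined = math.sqrt(sigma1**2 + sigma2**2)
    return diff > n_sigma * combined

# A 0.02 C/decade difference vs. assumed 1-sigma uncertainties of
# 0.05 (satellite) and 0.02 (surface) C/decade: not significant.
print(trend_difference_significant(0.14, 0.16, 0.05, 0.02))  # False
```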


ECS of 3.0C. Most empirical-based studies are trending lower than that now. The actual ECS is probably less important than TCR though.

 

The difference of 0.02C between the two trends (TLT vs sfc) is within both of their margins for error...it's pretty meaningless statistically. So I see no real point in wasting so much energy debating it, but to each his own. If the difference was much larger, then it would be more interesting.

 

While the TLT and sfc temperatures are all within the MOE, these empirical studies are extremely sensitive to small changes in the natural (PDO/ENSO) contribution, spatial coverage, and aerosol concentration contribution.  I'd be willing to bet that this sensitivity will cause estimates to begin to go back up as the temperature recovers over the next decade or so.  Though many have "trended" lower, there is still a large cluster of studies around 2.5-3K.  To further illustrate the point, the ECS of Zhou et al. 2013 was run including and excluding 2003-2012.  The ECS went from 3.1C to 1.9C depending on which three-decade period you choose. Hard to buy into any particular outlier at this point with so much variability in the same ECS methodology.

 

Maybe a conversation for another thread, but we've already warmed .8-1C.  I can't imagine we ONLY warm another 1C if we pump CO2 to 560ppm.
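The standard energy-budget arithmetic behind these numbers can be sketched quickly. 560 ppm is one doubling from the preindustrial 280 ppm, so the forcing comes out near the canonical 3.7 W/m^2; the dT/dF/dQ inputs below are invented for illustration, not taken from any study, and they show how small shifts in the assumed ocean-heat-uptake term move the estimate a lot (the endpoint sensitivity described above):

```python
import math

F2X = 3.7  # W/m^2, canonical forcing for doubled CO2

def co2_forcing(ppm, ppm0=280.0):
    """Logarithmic CO2 forcing, F = 5.35 * ln(C/C0) W/m^2 (standard fit)."""
    return 5.35 * math.log(ppm / ppm0)

def energy_budget_ecs(dT, dF, dQ):
    """Energy-budget ECS = F2X * dT / (dF - dQ): warming dT, forcing
    change dF, and ocean-heat-uptake change dQ between the endpoint
    periods. Illustrative inputs only."""
    return F2X * dT / (dF - dQ)

print(round(co2_forcing(560), 2))              # ~3.71, i.e. one doubling
print(round(energy_budget_ecs(0.8, 2.3, 0.6), 2))
print(round(energy_budget_ecs(0.8, 2.3, 0.9), 2))  # dQ up 0.3 -> ECS up ~0.4C
```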


professors?

I'm guessing SoC said something about that. That does sound ridiculous.

anyways.

People need to get over this idea that 6 months where a super nino causes a massive dump of heat into the LT, instead of it being stored or spread into the ocean (see OHC), should be the benchmark.

It doesn't mean the Earth hasn't warmed since 1998.

What makes 1 year, 1 month, or 1 day the benchmark?

Smooth that out to 3 years or 5 years and 1998 is blended completely in with how the atmosphere was then.

And it's a lot cooler than today.
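The smoothing argument can be sketched directly; the anomaly series below is invented toy data (not actual UAH values) with a single 1998-style spike embedded in a slow rise:

```python
def centered_running_mean(values, window):
    """Simple centered running mean (window must be odd); returns None
    at the edges where the full window isn't available."""
    half = window // 2
    out = []
    for i in range(len(values)):
        if i < half or i + half >= len(values):
            out.append(None)
        else:
            out.append(sum(values[i - half:i + half + 1]) / window)
    return out

# Toy annual anomalies: one spike year (index 2), then a steady rise.
anoms = [0.10, 0.12, 0.55, 0.15, 0.18, 0.20, 0.22, 0.25, 0.28, 0.30]
smoothed = centered_running_mean(anoms, 5)
# The spike blends into its neighbors, and the recent smoothed values
# end up higher than the spike era.
print(smoothed)
```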

UAH is averaging 0.33C for the year.

With tropics literally just at normal.

What do you think will happen if a major nino breaks out?

The UAH record is like 0.41C.

A major nino would certainly cause a 6-month period of 0.50 to 0.70C easy.

The earth has been collecting heat in the oceans the entire period.

I simply ask: why have temps over ice and land risen so much if radiative forcing isn't strengthening?

I hear you man, apparently this is not good enough for most people and it somehow escapes their narrow perspective. Like the ocean heat thing, we live on a patch of land on a water planet.

 

The land surface is subject to so many variables and responds more quickly to long-term weather regimes. You won't find the 'climate picture' there, no matter how much you look.


While the TLT and sfc temperatures are all within the MOE, these empirical studies are extremely sensitive to small changes in the natural (PDO/ENSO) contribution, spatial coverage, and aerosol concentration contribution.  I'd be willing to bet that this sensitivity will cause estimates to begin to go back up as the temperature recovers over the next decade or so.  Though many have "trended" lower, there is still a large cluster of studies around 2.5-3K.  To further illustrate the point, the ECS of Zhou et al. 2013 was run including and excluding 2003-2012.  The ECS went from 3.1C to 1.9C depending on which three-decade period you choose.

 

 

If you use a long period of record where your endpoints are 15-20 year means, then it doesn't matter nearly as much where your endpoint is. This is what Nic Lewis' study did when it used 150 years. Changing the final endpoint period from 1992-2007 to 1997-2012 only affected the ECS by about 0.05C. That study also used OHC from Levitus et al 2012, but OHC won't affect the TCR, only the ECS.
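A minimal sketch of why OHC enters the ECS but not the TCR in these energy-budget estimates; F2X = 3.7 W/m^2 is the standard doubled-CO2 forcing, and the numeric inputs are hypothetical:

```python
F2X = 3.7  # W/m^2 for doubled CO2

def tcr_estimate(dT, dF):
    # Transient response: no ocean-heat-uptake term, so the OHC
    # dataset (e.g. Levitus) never enters this number.
    return F2X * dT / dF

def ecs_estimate(dT, dF, dQ):
    # Equilibrium response: the heat still flowing into the ocean (dQ)
    # is subtracted, so the choice of OHC dataset matters here.
    return F2X * dT / (dF - dQ)

# Hypothetical endpoint-mean differences: changing dQ moves the ECS
# but leaves the TCR untouched.
print(round(tcr_estimate(0.75, 2.0), 2))
print(round(ecs_estimate(0.75, 2.0, 0.5), 2))
print(round(ecs_estimate(0.75, 2.0, 0.7), 2))
```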


ECS of 3.0C. Most empirical-based studies are trending lower than that now. The actual ECS is probably less important than TCR though...but I understand that there are still plenty of papers that argue higher...so it's a debate that is strong in the literature.

 

The difference of 0.02C between the two trends (TLT vs sfc) is within both of their margins for error...it's pretty meaningless statistically. So I see no real point in wasting so much energy debating it, but to each his own. If the difference was much larger, then it would be more interesting.

 

OK so the middle range of ECS is more like 2.5C - but the middle range is NOT 1.5C. Even on empirical based studies, which are the studies that yield the lowest ECS, that tends to be the low end of the range. 

 

3 is still the middle of the AR5's 1.5-4.5C range. 

 

There is only a .02C difference in their actual trends, but the TLT trend SHOULD be bigger than (not equal to) the surface trend, so the difference is statistically meaningful. 

 

And if we're looking at TMT the discrepancy is much bigger. 

 

So either 1) climate theory and models are wrong or 2) MSU products are uncertain

 

The AR5 clearly leans towards #2 and for good reason according to Mears 2011 - MSU products have trend uncertainty on the order of .1C/decade - far larger than surface trend uncertainty. 


There was definitely a slowdown.

 

 

 

I hear you man, apparently this is not good enough for most people and it somehow escapes their narrow perspective. Like the ocean heat thing, we live on a patch of land on a water planet.

 

The land surface is subject to so many variables and responds more quickly to long-term weather regimes. You won't find the 'climate picture' there, no matter how much you look.

 

 

Here is UAH with a 5 year smoothing. 

 

 

 

[chart: UAH with 5-year smoothing]

 

 

GISS

 

[chart: GISS with 5-year smoothing]


That was our brief hiatus window; there's maybe a 15% chance of a second one occurring this century. This is how the Earth redistributes energy, and once we start breaking into the insane forcings past 450 ppm, it becomes even less possible.

 

Even more so because the energy imbalance is so dang high because of this ocean stuff. It's literally off the charts and stuff will start stairstepping like crazy in order to keep equilibrium at least within 1/4 of where it should be.

 

Nobody understands how unprecedented AGW is in the long run or even the short term, except maybe James Hansen. We are due for an accelerated period of warming based on internal dynamics and the inability of carbon sinks to absorb more CO2.

 

Without strong policy efforts, we will have a Canfield ocean in a few centuries, and that is an instant reboot for life. It will take another 100 million years to recover.

 

There may have been a literal lapse in radiative forcing between 2005-2010, but that is gone; it was not realized to have ended until now because of temporal climate lag. A nasty combination of epic solar min and anomalous trades. Nasty because it created a false sense of security that wasted precious time and fooked up our ECS budget.

 

 

Sulfide-Rich Oceans May Have Impeded Evolution for Eons

August 16, 2002
 
A theory that suggests the oceans were far poorer in oxygen 1 billion to 2 billion years ago may answer an important evolutionary puzzle, says a pair of researchers from the University of Rochester and Harvard University in today's issue of Science.
For some unknown reason, eukaryotes, the kind of cells that make up all organisms except bacteria, got off to a slow start compared to their prolific bacterial cousins until about 2 billion years after they first appeared in the geologic record. The Rochester and Harvard scientists propose that a new and controversial model of the Earth's oceans may be able to account for the billion-year mystery.
Ariel Anbar, associate professor of earth and environmental sciences and chemistry at the University of Rochester, and Andrew Knoll, professor of evolutionary and organismic biology at Harvard University, suggest that although scientists have long thought that the oceans' history had only two major stages, a third, intermediate stage could explain certain fossil patterns better than the two-stage model could.
The two traditional stages, anoxic (without oxygen) in the earliest years of Earth's history and the oxic oceans of today, were defined by the disappearance of banded iron formations, massive layers of iron that exist in the rock strata around the world, about 2 billion years ago. To form these deposits, the oceans must have had high concentrations of dissolved iron. But since iron is removed from water when it reacts with oxygen, these bands of iron could only form in an anoxic ocean. This suggested that the oceans became oxic about 2 billion years ago. However, an intermediate ocean stage recently proposed by D. E. Canfield of Odense University, Denmark, suggests that the deep sea did not become rich in oxygen, but rather rich in hydrogen sulfide (the compound that gives rotten eggs their foul smell) between 2 billion and 1 billion years ago. According to Canfield, the oceans did not turn oxic until after 1 billion years ago. The model can also explain the end of banded iron formation because iron is also removed from water when it reacts with hydrogen sulfide.
After learning about Canfield's hypothesis, Anbar and Knoll began to consider its biological consequences. They realized that if the Canfield ocean model were correct, then some metals other than iron would have reacted with the sulfidic water and become very scarce. Some of these metals are important to life in general and to eukaryotes in particular. One of these metals, molybdenum (Mo), is needed by eukaryotes to help take in nitrogen from seawater. If molybdenum were in short supply, eukaryotes would have had a tough time getting enough nitrogen to survive. Today molybdenum is more abundant in the oceans than any other metal because it coexists well with oxic water. So after 1 billion years ago, when Canfield suggests the oceans became thoroughly oxygenated as they are today, molybdenum would have become abundant enough to allow eukaryotes to take in the much-needed nitrogen and flourish.
This scenario, Anbar and Knoll found, fits the evolutionary evidence. Fossils and the presence of biological compounds suggest that eukaryotes arrived on the scene as far back as 2.7 billion years ago, but were not as successful at proliferating as bacteria until after 1 billion years ago.
"The ancient biological record is very hard to read," says Anbar, "but it looks as though something changed at that time. We know that metals are an important link between ocean chemistry and biology today, so it makes sense that a similar link operated in the distant past."
Anbar acknowledges that more data is needed to determine if Canfield's model is correct, but hopes that the possible connection between the ocean chemistry of Earth's "middle age" and the record of eukaryotic evolution will spur more research into the complex history of the oceans and their effect on the evolution of life.
"It's remarkable that we aren't sure if the oceans were full of oxygen or hydrogen sulfide at that time," says Anbar. "This is a really basic chemical question that you'd think would be easy to answer. It shows just how hard it is to tease information from the rock record and how much more there is for us to learn about our origins." Anbar and Knoll are actively working on new measurements to tackle the problem.

OK so the middle range of ECS is more like 2.5C - but the middle range is NOT 1.5C. Even on empirical based studies, which are the studies that yield the lowest ECS, that tends to be the low end of the range.

3 is still the middle of the AR5's 1.5-4.5C range.

There is only a .02C difference in their actual trends, but the TLT trend SHOULD be bigger than (not equal to) the surface trend, so the difference is statistically meaningful.

And if we're looking at TMT the discrepancy is much bigger.

So either 1) climate theory and models are wrong or 2) MSU products are uncertain

The AR5 clearly leans towards #2 and for good reason according to Mears 2011 - MSU products have trend uncertainty on the order of .1C/decade - far larger than surface trend uncertainty.

I think you are arguing against your imagination. I never claimed the central estimate was 1.5. You made that up on your own.

I do lean toward the lower end though, as many of the newer studies now treat values >3C as much less likely. IPCC is a bit behind the curve on this IMHO since they take so long to put their reports together. They didn't have several of the post-2012 papers in there.

The more important aspect of a lot of these studies though is not the ECS but the TCR being 20-30% lower than IPCC.


I think you are arguing against your imagination. I never claimed the central estimate was 1.5. You made that up on your own.

I do lean toward the lower end though, as many of the newer studies now treat values >3C as much less likely. IPCC is a bit behind the curve on this IMHO since they take so long to put their reports together. They didn't have several of the post-2012 papers in there.

The more important aspect of a lot of these studies though is not the ECS but the TCR being 20-30% lower than IPCC.

 

I'm not saying you did. I'm just saying that whether the center of the range is 2.5C or 3C it most certainly is not 1.5C (which is the main point I was making). 


Even if that's what AR5 was arguing, why do you trust Mears et al 2011, which relies in part on the aforementioned radiosonde analysis to calculate uncertainty in the MSU/AMSU sounding analysis?

http://onlinelibrary.wiley.com/doi/10.1029/2010JD014954/full

Furthermore, Mears et al 2011 is a relative outlier with its uncertainty estimate, which is much larger than the 0.05C/decade estimate given by Spencer et al.

 

 

I don't trust Mears 2011 - the AR5 does. And I trust the AR5 over you.

 

Moreover, Penckwitt doesn't appear to rebut Mears at all. It only addresses one merge in satellites (MSU4 to AMSU9) and concludes, using their own independent merging calibration, that the UAH method is reasonably accurate for that one merge.


I don't trust Mears 2011 - the AR5 does. And I trust the AR5 over you.

Moreover, Penckwitt doesn't appear to rebut Mears at all. It only addresses one merge in satellites (MSU4 to AMSU9) and concludes, using their own independent merging calibration, that the UAH method is reasonably accurate for that one merge.

The problem is that Mears et al 2011 was published 4 years ago. Since then, both RSS and UAH have tweaked their respective merging algorithms to account for orbital drift and sensor degradation (UAH twice w/ the degradation of AQUA, RSS only once). This is addressed by Penkwitt et al 2014, which creates a unique dataset based on the merging algorithms as of 2013.

Bringing up one paper (a slightly outdated one, at that) to support a larger-than-consensus error potential isn't exactly scientific. You need to explain why you believe Mears et al 2011 is still applicable now, in 2015. Appealing to a perceived authority doesn't do anything to further your case.


The problem is that Mears et al 2011 was published 4 years ago. Since then, both RSS and UAH have tweaked their respective merging algorithms to account for orbital drift and sensor degradation (UAH twice w/ the degradation of AQUA, RSS only once). This is addressed by Penkwitt et al 2014, which creates a unique dataset based on the merging algorithms as of 2013.

Bringing up one paper (a slightly outdated one, at that) to support a larger-than-consensus error potential isn't exactly scientific. You need to explain why you believe Mears et al 2011 is still applicable now, in 2015. Appealing to a perceived authority doesn't do anything to further your case.

 

This tells me all I really need to know in the bolded.


Huh? What does it tell you?

My point was it's relatively outdated. Why not use the latest literature available? The methods used in Mears et al 2011 are no longer applicable to the extent they were back in 2011-12.

 

 

Why don't you post the rebuttals then?

 

All you've given us is one study that validated one specific merge in the MSU record and concluded it was relatively accurate. It doesn't even prove that there is no significant uncertainty introduced by that merge; all it seems to prove is that the merge is relatively accurate.


Why don't you post the rebuttals then?

All you've given us is one study that validated one specific merge in the MSU record and concluded it was relatively accurate. It doesn't even prove that there is no significant uncertainty introduced by that merge; all it seems to prove is that the merge is relatively accurate.

Mears et al 2011 is using a somewhat old, outdated merging formula/procedure to determine a portion of its uncertainty potential. I'm sure it's still somewhat applicable, but there's been a lot of newer literature published since 2012-13 with updated uncertainty estimates. Do you need me to link some of these for you?

If you'd rather read through Mears et al 2011, here's the link: http://images.remss.com/papers/rsspubs/Mears_JGR_2011_MSU_AMSU_Uncertainty.pdf


Mears et al 2011 is using a somewhat old, outdated merging formula/procedure to determine a portion of its uncertainty potential. I'm sure it's still somewhat applicable, but there's been a lot of newer literature published since 2012-13 with updated uncertainty estimates. Do you need me to link some of these for you?

If you'd rather read through Mears et al 2011, here's the link: http://images.remss.com/papers/rsspubs/Mears_JGR_2011_MSU_AMSU_Uncertainty.pdf

 

Yes, I'd love it if you could link these improvements other than Penckwitt. That study alone reduces the overall uncertainty through the 35-year period very little, as it pertains to only one merge (nor does it say that there is no uncertainty in that merge, only that the merging procedure used by UAH and RSS is adequate).


Yes, I'd love it if you could link these improvements other than Penckwitt. That study alone reduces the overall uncertainty through the 35-year period very little, as it pertains to only one merge (nor does it say that there is no uncertainty in that merge, only that the merging procedure used by UAH and RSS is adequate).

I'm super busy today but have this one on my phone.

Here's a reply from Spencer/Christy to Po-Chedley/Fu posted here earlier. This is referring to the proposed DN/CWC corrections using the short-lived NOAA-9 satellite. Mears et al 2011 originally proposed this adjustment, applied to RSS initially, despite noted inconsistencies.

http://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-12-00107.1

Abstract

Po-Chedley and Fu investigated the difference in the magnitude of global temperature trends generated from the Microwave Sounding Unit (MSU) for the midtroposphere (TMT, surface to about 75 hPa) between the University of Alabama in Huntsville (UAH) and Remote Sensing Systems (RSS). Their approach was to examine the magnitude of a noise-reduction coefficient of one short-lived satellite, NOAA-9, which differed from UAH and RSS. Using radiosonde comparisons over a 2-yr period, they calculated an adjustment to the UAH coefficient that, when applied to the UAH data, increased the UAH global TMT trend for 1979–2009 by +0.042 K/decade, which then happens to agree with RSS’s TMT trend. In studying their analysis, the authors demonstrate 1) the adjustment calculated using radiosondes is inconclusive when errors are accounted for; 2) the adjustment was applied in a manner inconsistent with the UAH satellite merging strategy, creating a larger change than would be generated had the actual UAH methodology been followed; and 3) that trends of a similar product that uses the same UAH coefficient are essentially identical to UAH and RSS.


Hadley updated for Feb. The UK Met Office prediction for 2015 of 0.64, issued in December, doesn't look bad, although a little low so far.

Monthly anomalies (C) with the annual mean in the last column; the second row for each year is % global coverage (0.000 / 0 = not yet reported):

        Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec   Year
2013   0.440  0.476  0.385  0.435  0.520  0.481  0.516  0.529  0.529  0.485  0.628  0.506  0.492
         83     83     83     80     79     79     80     80     79     80     80     81
2014   0.508  0.305  0.548  0.658  0.596  0.620  0.544  0.666  0.592  0.620  0.487  0.632  0.564
         82     82     80     81     79     80     82     83     81     82     82     83
2015   0.686  0.664  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.674
         84     82      0      0      0      0      0      0      0      0      0      0
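For reference, the year-to-date figure can be recomputed from the two reported monthly values (the posted annual figure of 0.674 presumably carries more decimal places internally):

```python
# Recompute the 2015 year-to-date Hadley anomaly from the two
# reported months (values as posted above, degrees C).
months_2015 = [0.686, 0.664]
ytd = sum(months_2015) / len(months_2015)
print(f"2015 year-to-date: {ytd:.3f} C")  # 0.675; table shows 0.674,
# likely rounded from unrounded monthly values
```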

The WWB is fading east of the dateline, but the Kelvin wave has really responded the last 2 months. Way too early to call it a lock, but 2015 should easily beat 2014's global temperature record.

It is already a guarantee on the surface sets unless a massive volcanic explosion happens.

The big story will be uah.

It could end up like 0.02C below the yearly record but still have crushed the 18-month record and longer, since the oceans are just too warm at the surface consistently.



Here's UAH with the curve that they used to use overlaid on top. They stopped using it ~3 years ago because they knew that it would soon lose its cyclical appearance, and now you can really see how silly that sin function was. I just overlaid a UAH graph from 2012 on top of the current graph and extended the curve.

[attachment: UAH stupid curve fitting.png]

Bold is complete speculation on your part.

March UAH is in from Dr. Roy Spencer's blog...  Down from the last few months at +0.26 C.  I would think ENSO warming will have an impact soon....

 

 

The eventual magnitude of this El Nino event is still quite uncertain. If it remains weak over the next 6-12 months, it's highly doubtful that the satellite datasets will be breaking global temperature records. If we can get to moderate, particularly high-end moderate, we might have a shot at the record. Given the first-year occurrence of a weak El Nino, statistics (at least since 1950) argue for the second-year Nino peaking < 1.2C for the trimonthly. Small sample size, but I don't think we're looking at a terribly robust Nino at this juncture.


Archived

This topic is now archived and is closed to further replies.
