2013 Global Temperatures


The_Global_Warmer

Recommended Posts

It's too bad there's not more objectivity in the administration of the temperature data sets. With James Hansen running GISS and Spencer in charge of UAH, there is plenty of brilliant science but a lack of unbiased work.

 

GISS and UAH are both well-publicized, peer-reviewed work, and no major problems have yet been found with either (at least with the current version of UAH) that would suggest either is inaccurate (in other words, that the true value lies outside their error bars). I believe both of them to be objective, realistic representations of global temperature within our current means of measuring it accurately.



GISS and UAH are both well-publicized, peer-reviewed work, and no major problems have yet been found with either (at least with the current version of UAH) that would suggest either is inaccurate (in other words, that the true value lies outside their error bars). I believe both of them to be objective, realistic representations of global temperature within our current means of measuring it accurately.

 

There was a recent paper which found a problem with the UAH record.

 

http://www.eurekalert.org/pub_releases/2012-05/uow-nrb050712.php

 

One popular climate record that shows a slower atmospheric warming trend than other studies contains a data calibration problem, and when the problem is corrected the results fall in line with other records and climate models, according to a new University of Washington study.

The finding is important because it helps confirm that models that simulate global warming agree with observations, said Stephen Po-Chedley, a UW graduate student in atmospheric sciences who wrote the paper with Qiang Fu, a UW professor of atmospheric sciences.

They identified a problem with the satellite temperature record put together by the University of Alabama in Huntsville. Researchers there were the first to release such a record, in 1989, and it has often been cited by climate change skeptics to cast doubt on models that show the impact of greenhouse gases on global warming.

In their paper, appearing this month in the American Meteorological Society's Journal of Atmospheric and Oceanic Technology, Po-Chedley and Fu examined the record from the researchers in Alabama along with satellite temperature records that were subsequently developed by the National Oceanic and Atmospheric Administration and Remote Sensing Systems.

Scientists like Po-Chedley and Fu have been studying the three records because each comes to a different conclusion.

"There's been a debate for many, many years about the different results but we didn't know which had a problem," Fu said. "This discovery reduces uncertainty, which is very important."

When they applied their correction to the Alabama-Huntsville climate record for a UW-derived tropospheric temperature measurement, it effectively eliminated differences with the other studies.

Scientists already had noticed that there were issues with the way the Alabama researchers handled data from NOAA-9, one satellite that collected temperature data for a short time in the mid-1980s. But Po-Chedley and Fu are the first to offer a calculation related to the NOAA-9 data for adjusting the Alabama findings, said Kevin Trenberth, a distinguished senior scientist at the National Center for Atmospheric Research.

"It should therefore make for a better record, as long as UAH accepts it," he said.

To come up with the correction, Po-Chedley and Fu closely examined the way the three teams interpreted readings from NOAA-9 and compared it to data collected from weather balloons about the temperature of the troposphere.

They found that the Alabama research incorrectly factors in the changing temperature of the NOAA-9 satellite itself and devised a method to estimate the impact on the Alabama trend.

Like how a baker might use an oven thermometer to gauge the true temperature of an oven and then adjust the oven dial accordingly, the researchers must adjust the temperature data collected by the satellites.

That's because the calibration of the instruments used to measure the Earth's temperature is different after the satellites are launched, and because the satellite readings are calibrated by the temperature of the satellite itself. The groups have each separately made their adjustments in part by comparing the satellite's data to that of other satellites in service at the same time.

Once Po-Chedley and Fu apply the correction, the Alabama-Huntsville record shows 0.21 F warming per decade in the tropics since 1979, instead of its previous finding of 0.13 F warming. Surface measurements show the temperature of Earth in the tropics has increased by about 0.21 F per decade.

The Remote Sensing Systems and NOAA reports continue to reflect warming of the troposphere that's close to the surface measurements, with warming of 0.26 F per decade and 0.33 F respectively.

The discrepancy among the records stems from challenges climate researchers face when using weather satellites to measure the temperature of the atmosphere. The records are a composite of over a dozen satellites launched since late 1978 that use microwaves to determine atmospheric temperature.

However, stitching together data collected by those satellites to discover how the climate has changed over time is a complicated matter. Other factors scientists must take into account include the satellite's drift over time and differences in the instruments used to measure atmospheric temperature on board each satellite.

The temperature reports look largely at the troposphere, which stretches from the surface of the earth to around 10 miles above it, where most weather occurs. Climate models show that this region of the atmosphere will warm considerably due to greenhouse gas emissions. In fact, scientists expect that in some areas, such as over the tropics, the troposphere will warm faster than the surface of the Earth.

The paper does not resolve all the discrepancies among the records, and researchers will continue to look at ways to reconcile those conflicts.

"It will be interesting to see how these differences are resolved in the coming years," Po-Chedley said.

 

###

 

The research was supported by the National Science Foundation and NOAA.


How many different satellite measurements are there for temperature?

It always seems that after an improved system of measurements is launched, someone goes looking for discrepancies to bring it in line with the tainted surface station network. Argo and now UAH.


How many different satellite measurements are there for temperature? It always seems that after an improved system of measurements is launched, someone goes looking for discrepancies to bring it in line with the tainted surface station network. Argo and now UAH.

 

Satellite temperature measurement has never been considered an "improvement." It has always been considered experimental and has had larger published error bars including UAH error bars published by Spencer and Christy. It has also required several major revisions that have drastically altered the temperature trend (again these revisions were accepted by Spencer and Christy because they were so obvious). There is far more "taint" associated with satellite drift and calibration between satellites when a new one replaces an old one. Only in the denier blogosphere are people convinced that UAH is some godly space age technology. In reality it is a bunch of measurements from a dozen or so satellite records that have been spliced together in a fairly arbitrary manner.
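To illustrate what that splicing involves, here is a minimal toy sketch (my own illustration, not UAH's or RSS's actual merge procedure): two hypothetical satellites are joined by estimating the inter-satellite bias over their overlap period and removing it before the series are combined.

```python
# Toy merge of two hypothetical satellite anomaly records via their overlap period.
# Real merges (UAH, RSS, NOAA) also handle diurnal drift, instrument body
# temperature, and more; this only shows the basic overlap-offset idea.
sat_a = {"1985-01": -0.10, "1985-02": -0.05, "1985-03": 0.00}
sat_b = {"1985-02": 0.07, "1985-03": 0.12, "1985-04": 0.15}

overlap = sorted(set(sat_a) & set(sat_b))
offset = sum(sat_b[m] - sat_a[m] for m in overlap) / len(overlap)  # inter-satellite bias

merged = dict(sat_a)
for month, value in sat_b.items():
    if month not in merged:
        merged[month] = round(value - offset, 3)  # put sat_b on sat_a's baseline

print(f"estimated offset: {offset:+.2f} C")
print(merged)  # {'1985-01': -0.1, '1985-02': -0.05, '1985-03': 0.0, '1985-04': 0.03}
```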


How many different satellite measurements are there for temperature? It always seems that after an improved system of measurements is launched, someone goes looking for discrepancies to bring it in line with the tainted surface station network. Argo and now UAH.

 

"Tainted surface station network" - you sure do love your conspiracies, but that's not the topic I'm writing about.  Instead I'm writing about your apparent ignorance of the differences between surface measurements and satellite measurements.

 

Surface measurements are direct temperature measurements.  Sensors at each of the collection points around the Earth actually measure and record the temperature data at their sites.  The raw data is often adjusted for various reasons, but the adjustments, and the reasons for them, are also recorded as the meta-data for the temperature record.

 

Satellites don't measure temperatures - they can't since they are traveling in the near-vacuum of Earth orbit - so they remotely measure a proxy parameter for temperature and process that proxy data through models to convert the raw data to temperature values.  In the case of the AMSU sensors the proxy parameter is the microwave emission of oxygen molecules in the Earth's atmosphere.  Several bands of microwave radiation are measured by radiometers on board the constellation of Earth-sensing satellites and processed through three layers of models.

 

The first layer of modeling is the orbital dynamics of the satellite, needed to know where on the Earth the data is from.  The basic orbital parameters are X, Y, Z, Velocity, Roll, Pitch, and Yaw.  Each of these values affects what the radiometer is looking at and, since ALL satellite orbits are unstable, none of these values is exact and all of them change with time.  Each Earth-sensing satellite has its own orbital dynamics parameters.

 

The second layer of modeling is for the radiometers.  Like all sensors, radiometers are not perfect, so they have to be calibrated after launch against some source with known characteristics.  In the case of satellite radiometers the calibration is done against the surface temperature record from the ground stations.  Hence the term ground-truthing. (FYI - if the surface record were truly tainted then ALL of the satellite records would be equally tainted.  Unless, of course, UAH is part of the conspiracy and has an untainted surface record they use to calibrate the satellite data for their results.  Hmmm.)   Keep in mind that space is tough on sensors, so it is a given that the radiometers degrade over time.  Sometimes slowly, sometimes quickly.  Often this degradation is noticed because one satellite's data begins to disagree with the data from other satellites.  The paper that Bluewave linked to above involves this sort of adjustment.  Ideally, adjustments can be made to the sensor data to keep it in service, but eventually the sensor performance falls to the point that there is no confidence in its data.

 

Once the raw satellite microwave data has been adjusted for location, and adjusted again for sensor performance, it is processed through the third layer of modeling which is the physics-based relationship between temperature and oxygen microwave emission.  This model also takes into account the filtering effect of the upper atmosphere on the tropospheric emissions.  The accuracy of this model is determined by how well we understand atmospheric physics.  No model is perfect but this one is better than many.

 

Bottom line - the GISS surface temperature record is the product of hundreds (thousands?) of people directly collecting temperature data around the world with published methodology, and the UAH temperature record is produced by a small (ten or so) group at a single university processing proxy data through their proprietary models.  Honestly - which is more likely to have errors?
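As a rough schematic of those three layers (purely illustrative placeholder code; the function bodies are stand-in assumptions, not the actual UAH, RSS, or NOAA algorithms):

```python
from dataclasses import dataclass

@dataclass
class Scan:
    """One hypothetical radiometer scan: raw counts plus satellite state."""
    raw_counts: float
    position: tuple      # simplified orbital state (x, y, z)
    attitude: tuple      # (roll, pitch, yaw)
    instrument_temp: float

def geolocate(scan):
    # Layer 1: orbital dynamics - figure out where on Earth the scan points.
    # Placeholder: a real system propagates the orbit and attitude over time.
    return ("lat", "lon")

def calibrate(scan, drift_correction):
    # Layer 2: sensor calibration - convert raw counts to a brightness
    # temperature, including a correction for sensor drift/degradation.
    return scan.raw_counts * 0.01 + drift_correction

def brightness_to_layer_temp(tb):
    # Layer 3: physics model - relate the O2 microwave brightness temperature
    # to a tropospheric layer temperature (placeholder linear relation).
    return 1.1 * tb - 30.0

def process(scan, drift_correction):
    footprint = geolocate(scan)
    tb = calibrate(scan, drift_correction)
    return {"footprint": footprint, "layer_temp_K": brightness_to_layer_temp(tb)}

print(process(Scan(25000.0, (0, 0, 7000), (0, 0, 0), 285.0), drift_correction=-0.4))
```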


There was a recent paper which found a problem with the UAH record.

http://www.eurekalert.org/pub_releases/2012-05/uow-nrb050712.php

...

The research was supported by the National Science Foundation and NOAA.

 

This research was funded by NOAA - isn't that a bit of a conflict of interest?


This research was funded by NOAA - isn't that a bit of a conflict of interest?

 

No. The mission of NOAA is to understand and predict changes in weather, climate, oceans and coasts. Their mission is not to advertise their work product or the work product of other governmental agencies and researchers or to dismiss the findings of other researchers without cause. Moreover, the paper is peer-reviewed.

 

I assume the findings are not considered a new gold-standard in satellite temperature detection, but rather yet another competing methodology of adjusting the very flawed raw data coming from satellites, with evidence both for and against it. 

 

I haven't read the paper though, so perhaps the results are more conclusive than that. If they were, I'd assume I would have heard more about it by now, but perhaps not.

 

When Spencer and/or Christy publish their critiques (sometimes very flawed critiques) of other methodologies (often in very lax journals), do you exhibit the same degree of skepticism due to the conflict of interest?


No. The mission of NOAA is to understand and predict changes in weather, climate, oceans and coasts. Their mission is not to advertise their work product or the work product of other governmental agencies and researchers or to dismiss the findings of other researchers without cause. Moreover, the paper is peer-reviewed.

 

Telling me NOAA's mission statement does not answer the question.

 

And peer-reviewed does not equal gospel.

 

I'm not saying the paper is invalid. But there is no reason to ignore the fact that it is addressing NOAA records and competing records, and is also funded by NOAA.


Telling me NOAA's mission statement does not answer the question.

 

And peer-reviewed does not equal gospel.

 

I'm not saying the paper is invalid. But there is no reason to ignore the fact that it is addressing NOAA records and competing records, and is also funded by NOAA.

 

It does answer the question. If NOAA's mission is science, then there is no conflict of interest when they assess their own work or the work of others. This is in contrast to commercially funded science (by the oil industry or the pharmaceutical industry, say), where the objective is not science but making money. I doubt that, as an institution, NOAA would object to a major revision or fault being pointed out in their work. If it were a legitimate fault, they would likely work to rectify it as quickly as possible.

 

Yes, people can be personally biased towards their own work product, but the authors of the paper are not the same ones who came up with the NOAA temperature record. Moreover, the paper is peer-reviewed, which doesn't make it gospel but does provide assurance that others have assessed the work, found that the evidence presented supports the conclusions of the authors, and have had the chance to respond and find fault with the reasoning.


It does answer the question. If NOAA's mission is science, then there is no conflict of interest when they assess their own work or the work of others. This is in contrast to commercially funded science (by the oil industry or the pharmaceutical industry, say), where the objective is not science but making money. I doubt that, as an institution, NOAA would object to a major revision or fault being pointed out in their work. If it were a legitimate fault, they would likely work to rectify it as quickly as possible.

 

Yes, people can be personally biased towards their own work product, but the authors of the paper are not the same ones who came up with the NOAA temperature record. Moreover, the paper is peer-reviewed, which doesn't make it gospel but does provide assurance that others have assessed the work, found that the evidence presented supports the conclusions of the authors, and have had the chance to respond and find fault with the reasoning.

 

Eh, I'm not concerned about it, but I think conflicts of interest are still possible in purely research/science funding relationships. I'll just leave it at that.


Warmth continues to slowly build...

image.jpg

Antarctica looks to be the biggest reversal on the map; it was cold there earlier in the year but is now a complete torch.

I would expect the warmth to continue as SSTs eventually cool due to an emerging La Nina and release the incredible amount of heat they had accumulated.


Antarctica looks to be the biggest reversal on the map; it was cold there earlier in the year but is now a complete torch.

I would expect the warmth to continue as SSTs eventually cool due to an emerging La Nina and release the incredible amount of heat they had accumulated.

 

An emerging La Nina? Nino 3.4 is barely below average at the moment and is forecast to turn positive due to the high OHC. The CPC forecasts ENSO neutral through fall 2013. The CFSv2 (shown below) actually forecasts a weak El Nino.

 

nino34Mon.gif


An emerging La Nina? Nino 3.4 is barely below average at the moment and is forecast to turn positive due to the high OHC. The CPC forecasts ENSO neutral through fall 2013. The CFSv2 (shown below) actually forecasts a weak El Nino.

 

 

 

 

The CFS has been biased high almost all spring and summer in forecasting ENSO. In fact, most dynamical models have been. They generally are pretty bad at forecasting it from more than a month or two out.

 

 

Until this source shows more defined subsurface anomalies, I think there's no reason to forecast anything other than neutral:

 

Dep_Sec_EQ_5d.gif

 

 

 

 

 

 

If anything, I'd lean weak La Nina over El Nino at this point given the trade wind forecast, but I think neutral is the overwhelming favorite.


An emerging La Nina? Nino 3.4 is barely below average at the moment and is forecast to turn positive due to the high OHC. The CPC forecasts ENSO neutral through fall 2013. The CFSv2 (shown below) actually forecasts a weak El Nino.

 

 

This is a classic emerging La Niña structure with the extreme cold near the South American coastline and a significant reversal of the warm anomalies near the California coastline into a decent cold pool that stretches southwest back towards the equator. You can really see the -ENSO/-PDO decadal pattern returning after the summer had a slightly different regime:

post-475-0-35614600-1376444328_thumb.gif

 

There's also been some cooling in the Indian Ocean, and I wouldn't be surprised to see the Gulf of Alaska cool while the Aleutian SSTs warm to get back to a more classic -PDO configuration. Just for the record, I think we'll see a weak Niña or cold-neutral winter. Not sure why you're looking at the CFS instead of listening to a lot of the top meteorologists on here who are clearly honking cold ENSO. Looking at SST maps also shows a lot of the classic features of Niña: extreme cold bubble near the South American coastline, upwelling from California southwest towards the equator, cooling in the Indian Ocean, warming in the western North Pacific, etc. I've been following them closely for the last couple of months and am fairly impressed with the reversal.


GISS Data for July:

 

July 2013 anomaly: +0.54°C (tied 10th warmest)

July 2013 vs. 1981-2010 baseline: +0.166°C

July 2013 vs. 30-Year moving average: +0.126°C

 

January-July 2013 anomaly: +0.564°C (8th warmest)

January-July 2013 vs. 1981-2010 baseline: +0.154°C

January-July 2013 vs. 30-Year moving average: +0.122°C
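For readers wondering why the same month shows different values against different baselines: re-baselining just subtracts the new reference period's mean for that month. A minimal sketch using the July figures quoted above (the offset is inferred from those two numbers; the +0.50C example is made up):

```python
# GISS anomalies are published relative to a 1951-1980 base period; expressing
# them against 1981-2010 means subtracting that period's mean anomaly for the month.
july_2013_vs_default = 0.54     # quoted above (default GISS base period)
july_2013_vs_1981_2010 = 0.166  # quoted above

implied_july_offset = july_2013_vs_default - july_2013_vs_1981_2010
print(f"implied 1981-2010 July offset: +{implied_july_offset:.3f}C")  # +0.374C

# Any other July can be re-expressed the same way, e.g. a hypothetical +0.50C July:
print(f"+0.50C July vs 1981-2010: +{0.50 - implied_july_offset:.3f}C")  # +0.126C
```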


GISS Data for July:

 

July 2013 anomaly: +0.54°C (tied 10th warmest)

July 2013 vs. 1981-2010 baseline: +0.166°C

July 2013 vs. 30-Year moving average: +0.126°C

 

January-July 2013 anomaly: +0.564°C (8th warmest)

January-July 2013 vs. 1981-2010 baseline: +0.154°C

January-July 2013 vs. 30-Year moving average: +0.122°C

 

 

That's definitely a bit higher than I expected based on the weatherbell maps...I was thinking more around +0.47, though I'm sure this prelim number will be revised.

 

They did a bunch of revisions in this current update.

 

 

June has been revised downward from +0.67 to +0.66

April has been revised upwards from +0.47 to +0.48 (after being revised from the initial +0.51 to +0.47 earlier)

March has been revised downward from +0.59 to +0.58

February has been revised downward from +0.52 to +0.50 (but after it was revised upward from its prelim +0.49)

January has been revised upward from +0.61 to +0.63


That's definitely a bit higher than I expected based on the weatherbell maps...I was thinking more around +0.47, though I'm sure this prelim number will be revised.

 

They did a bunch of revisions in this current update.

 

 

June has been revised downward from +0.67 to +0.66

April has been revised upwards from +0.47 to +0.48 (after being revised from the initial +0.51 to +0.47 earlier)

March has been revised downward from +0.59 to +0.58

February has been revised downward from +0.52 to +0.50 (but after it was revised upward from its prelim +0.49)

January has been revised upward from +0.61 to +0.63

I'm starting to wonder if the CFSv2 weatherbell method of predicting global temperatures on a monthly scale is still viable.  There has been quite a spread at times over the last few months.


I'm starting to wonder if the CFSv2 weatherbell method of predicting global temperatures on a monthly scale is still viable.  There has been quite a spread at times over the last few months.

 

 

The spread still has not exceeded the maximum of +0.64C from the previous 40 months of observation. The lower bound was a +0.37C spread. But 75% of the months fell between +0.50 and +0.60...while that is a good percentage, it still leaves an average of about 3 months per year that will not fall within that range.

 

 

You also have to consider that the newer months are prelim numbers and they will change. The observations from the past are numbers that are more set in stone (but still occasionally are revised themselves). GISS does a lot more revisions than people think.


Here are the GISS vs weatherbell spreads in 2013:

 

 

January: +0.55

February: +0.50

March: +0.56

April: +0.52

May: +0.58

June: +0.61

July: +0.63

 

 

So we've had 5 out of 7 months in 2013 fall within the +0.50 to +0.60 range so far...June will join them if it is revised downward any further by any magnitude. Even if it is not, there is still nothing out of the ordinary going on using weatherbell maps as a predictor versus the previous 3-4 years of data, since 5/7 is 71-72%. The only thing that might raise an eyebrow is consecutive months coming in above +0.60, but that is too small a sample for me personally to consider anything amiss, especially given that the two most recent months are the most likely to be revised as the data continues to trickle in from ground stations.
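A quick re-derivation of that 5/7 figure from the spreads listed above (nothing beyond the numbers already posted):

```python
# 2013 monthly spreads between GISS and the weatherbell/CFS estimate, from the list above.
spreads = [0.55, 0.50, 0.56, 0.52, 0.58, 0.61, 0.63]  # Jan-Jul

in_window = [s for s in spreads if 0.50 <= s <= 0.60]
print(f"{len(in_window)} of {len(spreads)} months in +0.50..+0.60 "
      f"({len(in_window) / len(spreads):.0%})")                   # 5 of 7 (71%)
print(f"mean spread so far: +{sum(spreads) / len(spreads):.3f}")  # +0.564
```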


Here are the GISS vs weatherbell spreads in 2013:

 

 

January: +0.55

February: +0.50

March: +0.56

April: +0.52

May: +0.58

June: +0.61

July: +0.63

 

 

So we've had 5 out of 7 months in 2013 fall within the +0.50 to +0.60 range so far...June will join them if it is revised downward any further by any magnitude. Even if it is not, there is still nothing out of the ordinary going on using weatherbell maps as a predictor versus the previous 3-4 years of data, since 5/7 is 71-72%. The only thing that might raise an eyebrow is consecutive months coming in above +0.60, but that is too small a sample for me personally to consider anything amiss, especially given that the two most recent months are the most likely to be revised as the data continues to trickle in from ground stations.

That is a fair point.  However, I wonder how well they correlate outside of just the absolute difference (I'll try and complete an analysis later).  I honestly thought the weatherbell to GISS method was just dumb luck, but perhaps the stats are more solid than I thought.  Thanks for the clarification.


The slow build continues up another little bit... We are headed to a very warm monthly reading.

Antarctica is killing us this month.... I would have never believed the arctic would be below normal and Antarctica would be above normal for our summer period. Big shocker.


 

donsutherland1, on 14 Aug 2013 - 09:39 AM, said:
 

GISS Data for July:

 

July 2013 anomaly: +0.54°C (tied 10th warmest)

July 2013 vs. 1981-2010 baseline: +0.166°C

July 2013 vs. 30-Year moving average: +0.126°C

 

January-July 2013 anomaly: +0.564°C (8th warmest)

January-July 2013 vs. 1981-2010 baseline: +0.154°C

January-July 2013 vs. 30-Year moving average: +0.122°C

 

 

 

 

 

If GISS averages +0.55/month for the rest of the year, it finishes at +0.558 (rounds to +0.56).

If GISS averages +0.60/month for the rest of the year, the yearly anomaly will be +0.579 (rounds to +0.58).

If GISS averages +0.65/month the rest of the way, the yearly anomaly is +0.60.

If GISS averages +0.70/month the rest of the way, the yearly anomaly is +0.62.
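The arithmetic behind those scenarios, as a quick sketch: it simply weights the Jan-Jul average of +0.564°C quoted above against an assumed Aug-Dec average.

```python
# Year-end GISS anomaly if Jan-Jul averaged +0.564C and the remaining five
# months (Aug-Dec) average some assumed value m.
jan_jul_avg = 0.564

def year_end(m):
    return (7 * jan_jul_avg + 5 * m) / 12

for m in (0.55, 0.60, 0.65, 0.70):
    print(f"Aug-Dec avg +{m:.2f} -> annual +{year_end(m):.3f}")
# -> +0.558, +0.579, +0.600, +0.621
```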

 

 

 

So far for August, the CFS is at +0.133C. ORH and Skier's equation would put that at +0.683C (+0.633 to +0.733C). +0.68 is the actual GISS all-time August record, set in 2011 and 1998.
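Spelling that out (my reading of the rule of thumb used in this thread: add the typical +0.50 to +0.60 CFS-to-GISS spread to the running CFS value):

```python
cfs_august_so_far = 0.133  # CFS value quoted above

# Rule-of-thumb GISS estimate: CFS plus the typical spread (central +0.55, range +0.50 to +0.60).
central = cfs_august_so_far + 0.55
low, high = cfs_august_so_far + 0.50, cfs_august_so_far + 0.60
print(f"GISS estimate: +{central:.3f}C (range +{low:.3f} to +{high:.3f})")  # +0.683 (+0.633 to +0.733)
```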

 

 

 

9gPyB71.png?1

nmaps_zpsd7665236.gif?t=1376494965

 

 

R1X3Tr6.png?1?2637


 

So far for August, the CFS is at +0.133C. ORH and Skier's equation would put that at +0.683C (+0.633 to +0.733C). +0.68 is the actual GISS all-time August record, set in 2011 and 1998.

Why on earth does that chart use 1950-1980? Is that to capture the maximum aerosol era for the greatest anomaly? I have seen that graphic before and never asked, but who uses that timeframe anymore?


Pretty telling. Is AGW disproved? Of course not. Are we warming as fast as previous projections given CO2 emissions? Clearly, no. Does this make future projections more questionable? It has to. In fact, warming projections by the "experts" have been steadily downgraded over the past decade or so. Catastrophic predictions of climate change should be taken with a grain of salt, given the track record so far.

 

 

post-558-0-85895200-1376596599_thumb.png

