OCTOBER PATTERN INDEX (OPI) MONITORING WINTER SEASON 2014-2015



Your use of "Curve fitting" implies bias...no one knows what goes in the black box equation that converts October 500mb heights to the OPI yet, but you can be sure it was somehow mathematically manipulated to standardize and scale the data to fit with the AO values...all necessary to test for any LINEAR relationship between two variables..as long as the math was the same for each year the OPI was calculated..

I think METS don't like this index at first glance in the same way that MDs hate the multiple regression calculators that are becoming rampant in medicine. Life-and-death decisions are made on the basis of multiple regression calculators taking in as many as 20 to 30 variables, with R values in some cases as low as .5 or .6...but in practice they work better than the guesstimates and experience-based judgment of the MD. The data are what the data are. If something seemingly so complex can be predicted with such precision by something so simple, it is threatening to those who have spent time to get the letters behind their name.

I see posts all the time where someone posts a sample size of 4 or 5 or 6 with temps or snowfall or tele values and throws around the word "correlation" and no one calls them out...despite the fact that the sample size is far below the n of 30 where natural variability fades, and no R value is calculated...and here comes solid science, 2 variables with an n of 36 and an R2 of .83, and everyone tries to poke holes in it before it's published.
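
To make the sample-size point concrete, here is a minimal Python sketch (synthetic numbers only, NOT the actual OPI or AO series, which are unpublished) of how the same underlying relationship carries very different statistical weight at n = 5 versus n = 36:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Synthetic predictor/response pair with a real built-in link;
# stand-ins only -- NOT the actual OPI or AO values.
x = rng.normal(size=36)
y = 0.9 * x + 0.4 * rng.normal(size=36)

r36, p36 = pearsonr(x, y)          # n = 36, like the OPI record
r5, p5 = pearsonr(x[:5], y[:5])    # n = 5, like a handful of analog years
print(f"n=36: r={r36:.2f}, R2={r36**2:.2f}, p={p36:.1e}")
print(f"n=5:  r={r5:.2f}, R2={r5**2:.2f}, p={p5:.2f}")

With n = 36 the p-value is vanishingly small; with n = 5 a similar-looking r can easily be noise.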

 

The bolded is anecdotal, a vague generalization, and a red herring, so... yeah.

 

The statements of caution have come in two flavors:

1. The methodology is unpublished, so any possible flaws in calculation are unknowable.

2. The first forecast test of the method was also its biggest "failure", though only just. This hints (but does not prove) that the method may have been statistically determined to give the highest correlation value for the years it was "trained" on, which tends to mean that the actual future predictive correlation value will likely be lower. How much so is anyone's guess.

 

Notice we are not claiming that the OPI is useless or rigged in some way (that is a straw man), but that one might take caution in claiming with confidence that "the OPI can predict 83% of the variability associated with the wintertime AO"... that's statistically true for the period it was trained over, but I and others suspect that some of that predictability is statistically inflated for the reasons given above.
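
For anyone unfamiliar with the concern in point 2, here's a minimal sketch (entirely synthetic data; it assumes nothing about how the OPI is actually constructed) of how a method tuned on its training years reports a higher correlation there than on years it never saw:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "October predictor" and "winter AO" with a modest true link.
x = rng.normal(size=72)
y = 0.5 * x + rng.normal(size=72)
train, test = slice(0, 36), slice(36, None)

# Deliberately over-flexible fit tuned to the 36 training "years".
coefs = np.polyfit(x[train], y[train], deg=9)
pred = np.polyval(coefs, x)

r_train = np.corrcoef(pred[train], y[train])[0, 1]
r_test = np.corrcoef(pred[test], y[test])[0, 1]
print(f"training r = {r_train:.2f}, out-of-sample r = {r_test:.2f}")

The training correlation is flattered by the fitting itself; the out-of-sample number is the one that typically resembles real forecast skill.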



We will have to wait until it is published to get the answers to those questions.

 

One thing I always become concerned about with very high correlations is "curve fitting". Now, I'm not saying that was done here, but until the details are presented in a paper, there will always be some level of skepticism.

 

I think too much stock is being placed in finding one-to-one correlations and in looking at any single one-to-one correlation. The atmosphere doesn't work one-to-one, and numerous variables control the pattern...an extreme phase of one pattern may dictate the overall look, but the pattern is shaped by numerous variables.


The bolded is anecdotal, a vague generalization, and a red herring, so... yeah.

 

The statements of caution have come in two flavors:

1. The methodology is unpublished, so any possible flaws in calculation are unknowable.

2. The first forecast test of the method was also its biggest "failure", though only just. This hints (but does not prove) that the method may have been statistically determined to give the highest correlation value for the years it was "trained" on, which tends to mean that the actual future predictive correlation value will likely be lower. How much so is anyone's guess.

 

Notice we are not claiming that the OPI is useless or rigged in some way (that is a straw man), but that one might take caution in claiming with confidence that "the OPI can predict 83% of the variability associated with the wintertime AO"... that's statistically true for the period it was trained over, but I and others suspect that some of that predictability is statistically inflated for the reasons given above.

 

Not only that, some of us who are bringing up the questions are probably near the top of the list on this forum when it comes to using statistics in forecasting/meteorology. Quite the opposite of the implied "damned those computers and statistics, we can do it better ourselves" mindset.

 

I also think heavy_wx brings up a good point...why does the data only go back to 1976 when it is using 5H heights? That data is readily available back another 27 years.


I also think heavy_wx brings up a good point...why does the data only go back to 1976 when it is using 5H heights? That data is readily available back another 27 years.

 

I thought it was a good point too. My guess, though, is that their focus was on comparing themselves against Cohen's SCE and SAI work, which goes back to the early-to-mid-70s snow cover data. I know one of their claims last year showed examples of years in which the OPI outperformed the SCE/SAI (e.g., the SAI favored a -AO but the OPI did not, and the OPI was more correct, and vice versa).


My analogy to medical risk calculators is this...and it's not anecdotal at all...those correlations are revolutionizing care, disease state by disease state...

In medicine, in the end, after taking all the variables and the correlation into account, what you end up with is dividing the resulting range into quartiles or quintiles for the predictive effect of what will happen to the patient...

So to define an OPI of -1.4 vs. the AO as a significant bust, when the predictive power as it relates to what you are hoping to forecast (NA temps, snowfall, precip) is undefined, is premature...

I have seen snowfall and temp correlation maps that analyze +AO vs. -AO, and > -1 vs. < -1, but I don't remember any showing that > 1.0 vs. < 1.0 is a significant difference in predictive power for NA winter temps, snowfall, precip...

In other words, there is not a basis-point-for-basis-point correlation between the AO and DJF temps, snowfall, precip.

So show me a map with correlations of > 1.0 vs. < 1.0 against DJF NA temps, snowfall, precip, and then you can define variances of 1.2-1.4 as busts...
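
For what it's worth, the composite-difference map being asked for is straightforward to build once you have gridded DJF data; a minimal sketch (with placeholder random arrays standing in for a real temperature analysis) looks like this:

import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: 36 winters of DJF temp anomalies on a small grid,
# plus one DJF AO value per winter. Purely illustrative stand-ins.
n_years, nlat, nlon = 36, 10, 20
temps = rng.normal(size=(n_years, nlat, nlon))
ao = np.linspace(-2.0, 2.0, n_years)   # spans both sides of +/-1

hi, lo = ao > 1.0, ao < 1.0            # the >1 vs <1 split in question
diff_map = temps[hi].mean(axis=0) - temps[lo].mean(axis=0)
print("max |AO>1 minus AO<1 composite|:",
      round(float(np.abs(diff_map).max()), 2))

Swap the random arrays for a real reanalysis and the same few lines answer whether the > 1.0 vs. < 1.0 composites actually differ.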


My analogy to medical risk calculators is this...and it's not anecdotal at all...those correlations are revolutionizing care, disease state by disease state...

In medicine, in the end, after taking all the variables and the correlation into account, what you end up with is dividing the resulting range into quartiles or quintiles for the predictive effect of what will happen to the patient...

So to define an OPI of -1.4 vs. the AO as a significant bust, when the predictive power as it relates to what you are hoping to forecast (NA temps, snowfall, precip) is undefined, is premature...

I have seen snowfall and temp correlation maps that analyze +AO vs. -AO, and > -1 vs. < -1, but I don't remember any showing that > 1.0 vs. < 1.0 is a significant difference in predictive power for NA winter temps, snowfall, precip...

In other words, there is not a basis-point-for-basis-point correlation between the AO and DJF temps, snowfall, precip.

So show me a map with correlations of > 1.0 vs. < 1.0 against DJF NA temps, snowfall, precip, and then you can define variances of 1.2-1.4 as busts...

 

What on Earth are you talking about?


What on Earth are you talking about?

My point is that in the correlation chain of:

Oct 500mb Heights -> OPI -> (0.83) AO -> (???) DJF SCE/Snowfall/Temp

each variable has a level of significance relative to the next step it is predicting...

So, in other words, is a variance of OPI minus AO of +/- 0.01 significant in predicting DJF SCE/Snowfall/Temp?

Is +/- .1 a significant bust?

Is +/- 1.4 significant?

What magnitude of variance is truly a "bust" in predictive ability?

The data below give an example...

The correlation chain is Oct 500mb -> OPI -> (0.83) AO -> (-0.37) concurrent SCE in North America

The data on the lead-lag actually support that the AO leads NA SCE anomalies by about 1 week...and that with AO > 1, SCE was about normal, and with AO < 1, SCE was about 1 SD above normal. So if you were trying to predict JF SCE anomalies from the OPI, last year's OPI of 1.64 misled you by predicting a JF AO that would have you expecting normal SCE anomalies...when the actual AO was .2, and so the expected SCE was anomalously high.

So I answered my own question...the -1.4 bust in 2013-2014 WAS a significant bust in that it led to an incorrect prediction of JF SCE anomalies in North America. I misread this paper and thought it was looking at an anomalously high AO of +1 and an anomalously low AO of -1 with neutral values excluded...but the paper was looking at AO of < or > 1...so for JF SCE the OPI was a significant bust.
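
As a rough back-of-envelope on why each link in that chain matters, here is a sketch under the simplifying assumption that the links are linear and independent (which real teleconnections need not satisfy):

# Variance explained attenuates along a correlation chain.
# Simplifying assumption: linear, independent links -- not a claim
# from the (unpublished) paper itself.
r_opi_ao = 0.91    # r, if "83% of the variability" means R2 = 0.83
r_ao_sce = -0.37   # AO vs. concurrent NA snow-cover extent, per above
r_chain = r_opi_ao * r_ao_sce
print(f"implied OPI->SCE r ~ {r_chain:.2f}, "
      f"variance explained ~ {r_chain**2:.0%}")
# ~ -0.34 and ~11%: most of the SCE variance lies outside the chain.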

[two image attachments: AO vs. NA SCE lead-lag data referenced above]


My point is that in the correlation chain of:

Oct 500mb Heights -> OPI -> (0.83) AO -> (???) DJF SCE/Snowfall/Temp

each variable has a level of significance relative to the next step it is predicting...

So, in other words, is a variance of OPI minus AO of +/- 0.01 significant in predicting DJF SCE/Snowfall/Temp?

Is +/- .1 a significant bust?

Is +/- 1.4 significant?

What magnitude of variance is truly a "bust" in predictive ability?

The data below give an example...

The correlation chain is Oct 500mb -> OPI -> (0.83) AO -> (-0.37) concurrent SCE in North America

The data on the lead-lag actually support that the AO leads NA SCE anomalies by about 1 week...and that with AO > 1, SCE was about normal, and with AO < 1, SCE was about 1 SD above normal. So if you were trying to predict JF SCE anomalies from the OPI, last year's OPI of 1.64 misled you by predicting a JF AO that would have you expecting normal SCE anomalies...when the actual AO was .2, and so the expected SCE was anomalously high.

So I answered my own question...the -1.4 bust in 2013-2014 WAS a significant bust in that it led to an incorrect prediction of JF SCE anomalies in North America. I misread this paper and thought it was looking at an anomalously high AO of +1 and an anomalously low AO of -1 with neutral values excluded...but the paper was looking at AO of < or > 1...so for JF SCE the OPI was a significant bust.

 

 

The OPI's performance in this thread was being measured only against the AO index (since that is the information we were given)...not what the snow extent in North America was during the winter.

 

The OPI predicted a very positive AO for DJF. The verification was a neutral AO. This was a bust.


Okay, so if the OPI this year is 1.01 and the AO is .49...is that a bust or not? That's a variance of .52...within approximately .5 SD and 8% of the range...but like last year, the OPI predicted positive and the AO is neutral...bust or not? A correlation of .83 says it should be within 17%, right?

And there is a comment further up about snow and whether or not to buy snowplow stocks... (pbweather)


Okay, so if the OPI this year is 1.01 and the AO is .49...is that a bust or not? That's a variance of .52...within approximately .5 SD and 8% of the range...but like last year, the OPI predicted positive and the AO is neutral...bust or not? A correlation of .83 says it should be within 17%, right?

And there is a comment further up about snow and whether or not to buy snowplow stocks... (pbweather)

 

 

1 vs 0.5 is not a terrible forecast. Last year's 1.6 vs 0.2 is though. The latter is not useful from a forecasting perspective.
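
One hedged way to put numbers on "terrible" (a sketch, assuming the quoted 0.83 is R2, that the AO index is in standard-deviation units, and that the residuals are roughly normal):

import math

r2 = 0.83                      # "83% of the variability", per the claim
resid_sd = math.sqrt(1 - r2)   # leftover scatter, ~0.41 AO SDs

for label, miss in [("2013-14: 1.6 vs 0.2", 1.4),
                    ("2014-15: 1.0 vs 0.5", 0.5)]:
    print(f"{label}: miss = {miss / resid_sd:.1f} residual SDs")
# ~3.4 vs ~1.2 residual SDs: the first is a genuine outlier for the
# claimed skill; the second is ordinary scatter.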

 

Nobody is saying one bad year discredits the method. What I (and some others) are arguing is that the correlation may not be representative of the OPI's forecasting ability if it was curve-fitted to some extent to match the past observations. Having it bust by the largest amount on its dataset in the first year after it was introduced only increases the skepticism that curve-fitting boosted the correlation. More large busts going forward would be the behavior of an index that was curve-fitted and not really as skillful of a predictor as claimed.

 

This is why we'll need to see the published version of this index to erase some of the doubt and see the physical mechanism behind the index and the winter pattern...much like we originally saw with snow cover extent.


I covered exactly why that NAO indicator busted in my blog: it was due to the intense North Pacific SST pattern that affected the centers of action of the East Pacific Oscillation and the Tropical Northern Hemisphere pattern, which convoluted the jet pattern and the NAO/AO state. No one had a super indicator for the winter of 2013-2014. If this North Pacific anomaly shows up again for winter 2014-2015, these indices could fail again. Note: record September 2014 SSTs over the area covering 70N-30N & 180W-130W, according to reanalysis; this October a few cyclones over the area have upwelled cooler water, but I bet October still ends up in the top 5 or even top 3 warmest for the bounded area.

 

http://wxmidwest.blogspot.com/2014/10/the-experimental-winter-djf-2014-15.html

 

There is a reason for the OPI not working last year, my indicator not working last year, and Cohen's indicators diverging from each other last year.

 

1. Extreme NP pattern, as described in my blog.

 

2. Using exact temporal bounds for snow advancement (meaning late September and early November matter), as DT described.


I covered exactly why that NAO indicator busted in my blog: it was due to the intense North Pacific SST pattern that affected the centers of action of the East Pacific Oscillation and the Tropical Northern Hemisphere pattern, which convoluted the jet pattern and the NAO/AO state. No one had a super indicator for the winter of 2013-2014. If this North Pacific anomaly shows up again for winter 2014-2015, these indices could fail again. Note: record September 2014 SSTs over the area covering 70N-30N & 180W-130W, according to reanalysis; this October a few cyclones over the area have upwelled cooler water, but I bet October still ends up in the top 5 or even top 3 warmest for the bounded area.

 

http://wxmidwest.blogspot.com/2014/10/the-experimental-winter-djf-2014-15.html

 

There is a reason for the OPI not working last year, my indicator not working last year, and Cohen's indicators diverging from each other last year.

 

1. Extreme NP pattern, as described in my blog.

 

2. Using exact temporal bounds for snow advancement (meaning late September and early November matter), as DT described.

 

I understand where you're coming from, but I've come around to believing that what occurred in the N Pac (the NE Pac in particular) was more a byproduct of a persistent pattern than a driver. Mid-October 2013 was when the anomalous -EPO started showing. The NE Pac was solidly below normal when it began, but as the pattern persisted the SSTs responded accordingly. Was there some sort of feedback helping sustain the pattern once it got going? I'll defer.

 

Another question I have about the N Pac is whether or not "warm" anomalies can cause a strong atmospheric response. It's relative warmth. Does the difference between 50-55 and 45-50 degree ocean water have enough influence to drive height/circulation patterns? I'll defer on that as well.


I understand where you're coming from, but I've come around to believing that what occurred in the N Pac (the NE Pac in particular) was more a byproduct of a persistent pattern than a driver. Mid-October 2013 was when the anomalous -EPO started showing. The NE Pac was solidly below normal when it began, but as the pattern persisted the SSTs responded accordingly. Was there some sort of feedback helping sustain the pattern once it got going? I'll defer.

Another question I have about the N Pac is whether or not "warm" anomalies can cause a strong atmospheric response. It's relative warmth. Does the difference between 50-55 and 45-50 degree ocean water have enough influence to drive height/circulation patterns? I'll defer on that as well.

 

 

In the absence of a powerful driver, the SSTs can act as a feedback. With higher SSTs, you'll have higher evaporation and more latent heat release in the upper levels, which raises heights...it can't offset something like strong tropical forcing, but if the other signals are weak, it can help keep the pattern in place.


In the absence of a powerful driver, the SSTs can act as a feedback. With higher SSTs, you'll have higher evaporation and more latent heat release in the upper levels, which raises heights...it can't offset something like strong tropical forcing, but if the other signals are weak, it can help keep the pattern in place.

 

Thanks, ORH. It's interesting what's happening this month: a very quick reversal of the anomalies, likely to continue (especially in the EPac) for the remainder of the month. A pretty good reminder that SSTAs are volatile and easy to move when there is a strong driver. Luckily, the byproduct so far is a pretty nice +PDO look building.


In the absence of a powerful driver, the SSTs can act as a feedback. With higher SSTs, you'll have higher evaporation and more latent heat release in the upper levels, which raises heights...it can't offset something like strong tropical forcing, but if the other signals are weak, it can help keep the pattern in place.

 

ORH,

SoC said that he felt that warmer SSTs cause lower surface pressure. Would that jibe with higher heights? Doesn't lower pressure sort of jibe more with upper troughs? Just trying to generate discussion.


I understand where you're coming from, but I've come around to believing that what occurred in the N Pac (the NE Pac in particular) was more a byproduct of a persistent pattern than a driver. Mid-October 2013 was when the anomalous -EPO started showing. The NE Pac was solidly below normal when it began, but as the pattern persisted the SSTs responded accordingly. Was there some sort of feedback helping sustain the pattern once it got going? I'll defer.

Another question I have about the N Pac is whether or not "warm" anomalies can cause a strong atmospheric response. It's relative warmth. Does the difference between 50-55 and 45-50 degree ocean water have enough influence to drive height/circulation patterns? I'll defer on that as well.

Actually, there is a paper being written about this past winter, where I'm the lead author along with 2 climatologists and a professor, that actually gets into that. Stay tuned.


We also need to understand that these SSTAs in the NPAC had been record high since 1948 (reanalysis, Kalnay et al. 1996) during some of the months, and even since the late 1800s (20th century dataset). ORH is right: there are relationships that Namias covered in many of his papers, with bottom-up forcing mechanisms and feedbacks, some of which act much more forcefully when they are 2, 3, 4 sigma above normal.


ORH,

SoC said that he felt that warmer SSTs cause lower surface pressure. Would that jibe with higher heights? Doesn't lower pressure sort of jibe more with upper troughs? Just trying to generate discussion.

 

 

I would think that is minor compared to the increased latent heat. You'd need to show that a higher SST anomaly starts producing deep cyclones rather than just slightly lower pressures at the surface. I don't know the exact magnitudes of all the responses, but usually heat release from the ocean is going to be much larger than something like a sfc pressure response...at least as I would understand it.


Actually, there is a paper being written about this past winter, where I'm the lead author along with 2 climatologists and a professor, that actually gets into that. Stay tuned.

 

Looking forward to the read for sure. I loved last winter because it was nonstop interesting and fun. Unfortunately, patterns like last winter are quite infrequent. I wasn't living on the east coast the last time. Hopefully I'm still around if it happens again.

 

Any idea what happened in '06? October was in the top 10 of both Eurasian extent and Sept-Oct increase but had a raging +AO for DJ. I don't want a redux, considering we are moving toward similar ENSO and SCE conditions.


No, it did not... I forecasted a POSITIVE NAO ALL winter long.

Dude, I have been doing this a loooong time. I have had good seasonal forecasts...so-so ones and bad ones.

Yours was the most inept, mindless pile of cow dung I have ever seen. Even worse, at no point did it occur to you that you MIGHT be wrong.

You need to find another hobby.

 

Really. You have no clue.
 

The NAO forecast you endorsed in your winter outlook last year busted even worse.


100% correct. Al and I talked about it all last winter and spring.


 

I covered exactly why that NAO indicator busted in my blog: it was due to the intense North Pacific SST pattern that affected the centers of action of the East Pacific Oscillation and the Tropical Northern Hemisphere pattern, which convoluted the jet pattern and the NAO/AO state. No one had a super indicator for the winter of 2013-2014. If this North Pacific anomaly shows up again for winter 2014-2015, these indices could fail again. Note: record September 2014 SSTs over the area covering 70N-30N & 180W-130W, according to reanalysis; this October a few cyclones over the area have upwelled cooler water, but I bet October still ends up in the top 5 or even top 3 warmest for the bounded area.

 

http://wxmidwest.blogspot.com/2014/10/the-experimental-winter-djf-2014-15.html

 

There is a reason for the OPI not working last year, my indicator not working last year, and Cohen's indicators diverging from each other last year.

 

1. Extreme NP pattern, as described in my blog.

 

2. Using exact temporal bounds for snow advancement (meaning late September and early November matter), as DT described.


100% correct...this is why I was a loud voice of opposition against the OPI when it came out last year.

 

You can't give a break to an index based upon perceived notions of anomalous weather patterns. How exactly would we define a "normal" winter? One could make an argument that each and every winter is "extraordinary" in a particular way. The most objective way of examining it is simply testing the validity through successive years of observation: do the resultant AO values closely mirror what was forecasted or not? Last year was the first observational period, and the modality ended up being correct but the magnitude fell significantly short. We've got to include all years, even the perceived "abnormal" ones, in the observation basket; otherwise there's no legitimate correlation. This new research definitely sounds promising, and it's quite possible last year was just one of the "miss" years. But we're going to need to experience many winters to verify this (and of course reading the paper when it's released would be helpful).


I guess I don't get how curve fitting applies to a non-continuous, discrete function where one year's value doesn't influence the next year's value...but I agree that the equation itself and the input parameters, such as the axis and ellipticization values that seem to be original to the OPI authors' work, need scrutiny...although if it busts again it might be a moot point.
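
Curve fitting doesn't require a continuous function, though; it can sneak in through exactly those free parameters (axis, ellipticization, domain bounds). A minimal sketch of that selection effect, with everything synthetic: screen enough parameter settings against 36 years of pure noise and the best historical correlation looks impressive anyway.

import numpy as np

rng = np.random.default_rng(7)

# 36 "years" of a target index and 500 candidate predictors that are
# pure noise -- stand-ins for trying many parameter settings.
target = rng.normal(size=36)
candidates = rng.normal(size=(500, 36))

rs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])
print(f"best in-sample |r| from noise alone: {np.abs(rs).max():.2f}")
# Typically ~0.5: searching over settings manufactures apparent skill
# that has no reason to carry forward to independent years.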

1 vs 0.5 is not a terrible forecast. Last year's 1.6 vs 0.2 is though. The latter is not useful from a forecasting perspective.

 

Nobody is saying one bad year discredits the method. What I (and some others) are arguing is that the correlation may not be representative of the OPI's forecasting ability if it was curve-fitted to some extent to match the past observations. Having it bust by the largest amount on its dataset in the first year after it was introduced only increases the skepticism that curve-fitting boosted the correlation. More large busts going forward would be the behavior of an index that was curve-fitted and not really as skillful of a predictor as claimed.

 

This is why we'll need to see the published version of this index to erase some of the doubt and see the physical mechanism behind the index and the winter pattern...much like we originally saw with snow cover extent.


I would think that is minor compared to the increased latent heat. You'd need to show that a higher SST anomaly starts producing deep cyclones rather than just slightly lower pressures at the surface. I don't know the exact magnitudes of all the responses, but usually heat release from the ocean is going to be much larger than something like a sfc pressure response...at least as I would understand it.

I think it depends on the altitude you're looking at. When looking at dynamics like ENSO SSTs, it's easy to see how the coupled mechanisms govern the Walker cell. So I don't see how or why the underlying physics should work any differently over the N Pac.

The trade winds exist largely because the W Pac is warmer than the E Pac on the macroscale. There's enhanced lift/a stronger Hadley cell in the W Pac, resulting in reduced surface pressure, and vice versa in the E Pac; thus your Walker cell. I think the geopotential-derived feedbacks you're referring to are based in the upper troposphere and depend on a lot of other factors.


Must be back up... -2.96 with 18 days of real data + 10 days of GFS.

 

Per "what is OPI" link, the value starts stabilizing around now. Given that it still is missing the last few days of the month which are likely to be +AO and thus + values, the index itself should stabilize somewhere around -2.5.

 

That's about halfway between 2012 and 2009.
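
The stabilization estimate is just averaging arithmetic. A quick sanity check, assuming (purely for illustration; the unpublished method may weight days differently) that the monthly OPI behaves like a simple mean of daily values:

# 28 days in hand (18 observed + 10 GFS) averaging -2.96; how much can
# the last ~3 days of a 31-day October move the monthly mean?
days_in, current_mean = 28, -2.96
for tail in (0.0, 1.0, 2.0):   # assumed average of the final 3 days
    final = (days_in * current_mean + 3 * tail) / 31
    print(f"final 3 days avg {tail:+.1f} -> monthly value ~ {final:.2f}")
# Spans roughly -2.67 to -2.48, consistent with "around -2.5".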

