February 2016 Forecasts/Disco/Obs


snowman19


Wave 1 is interesting. Could be some snow for the area, especially north and west. The Para has it tracking over SNJ; the EPS has it tracking under NYC.

 

I don't see that being much of an issue; I think it ultimately will trend north. I don't like SWFE-type systems here when there is ridging out west; they usually tend to amplify further in the short range and end up further north.


My concern about the possible February 24-25 storm is that blocking will not yet be in place during the event, even as it's developing. The latest GFS ensemble forecast has most members still showing a positive AO during the timeframe in question. While some significant snowfalls have occurred with an AO+ during the second half of February, 78% of NYC's 6" or greater snowstorms during that period occurred with an AO-.

 

[Image: AO ensemble forecast chart, AO02182016.jpg]

 

There's still a good degree of uncertainty. First, it is possible that the AO dives more quickly, as shown by a few ensemble members. Second, if the storm develops somewhat later than the consensus forecast, that could also allow for a colder solution.

 

Right now, if I had to venture a guess, interior sections would have a greater likelihood of seeing appreciable or greater snowfall (4" or more) than New York City and points eastward, unless blocking develops more quickly and/or the storm's development and progression is slower than currently modeled. One still can't rule out either a more significant impact on the coastal plain or a focus much farther inland (e.g., if blocking develops even more slowly than most of the ensemble members show).


FWIW (and I know you're just parroting what you read from real meteorologists, and it's them that should know better, not you), there is very little evidence for the importance of "seasonal trends." I.e., there is very little evidence that if models are behaving in a certain way in a given pattern, you should expect them to continue to behave that way. This is not entirely true - certain patterns can interact with a known bias of a model in predictable ways - but big picture, the notion that you have to adjust a model's output because, for instance, it's shown a SE bias with the last several storms is not science-based.

 

You'll make better forecasts if you don't make hand-wave adjustments to model output. That's why quantitative consensus forecasts outperform humans these days (to wit, the Weather Channel (quantitative) outperforming the NWS (qualitative/quantitative mix) for high temp forecasts; the model consensus (quantitative) outperforming the NHC (qualitative/quantitative mix) for hurricane track forecasts).
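
To illustrate the point about consensus, here is a minimal sketch in Python with entirely made-up numbers and model names (nothing here comes from real guidance): independent errors from different models tend to partially cancel when you take an equal-weight average, which is why a simple consensus is hard to beat.

# Illustrative sketch only (hypothetical numbers and model names): why an
# equal-weight model consensus is hard to beat - independent errors tend to cancel.
observed_high = 34.0  # verifying high temperature (deg F), made up
guidance = {"Model A": 31.0, "Model B": 37.0, "Model C": 33.0}  # made-up model forecasts

consensus = sum(guidance.values()) / len(guidance)  # simple equal-weight average

for name, forecast in guidance.items():
    print(f"{name}: forecast {forecast:.1f}, error {abs(forecast - observed_high):.1f} deg")
print(f"Consensus: forecast {consensus:.1f}, error {abs(consensus - observed_high):.1f} deg")

# Individual errors here are 3.0, 3.0, and 1.0 deg, while the consensus
# (33.7 deg) misses by only 0.3 deg - better than every individual model.

Of course, with real guidance the consensus won't beat every member on every forecast, but averaged over many cases it is very difficult for any single source, human or model, to consistently outperform it.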

Serious question-

 

Do you have evidence that the Weather Channel's forecasts are quantitative while the NWS's are qualitative, and is there actually something that 'scores' their performance, similar to model verification scores? Just curious.

There's a site that tries to compare forecasts (http://www.forecastadvisor.com/), but its methodology has limits.

 

For example, an accurate temperature forecast is defined as one that falls within 3° of the actual temperature.

 

Consider the following scenario of two forecasters:

 

Forecaster 1:

Errors for the last 3 temperatures: 3°, 2°, 3° -- Accuracy would be scored as 100%.

 

Forecaster 2:

Errors for the last 3 temperatures: 1°, 4°, 2° -- Accuracy would be scored as 67%.

 

Yet, if one scored each forecaster on average error, Forecaster 2 (2.3°) would be better than Forecaster 1 (2.7°). Scoring on the sum of squared errors would yield the same result. This is just for illustrative purposes, as one would be dealing with many more forecasts (the site uses the past month and the past year).
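
To make the comparison concrete, here is a small Python sketch (using only the three hypothetical errors above; the function names are my own) that re-scores both forecasters under the site's within-3° accuracy definition, the average error, and the sum of squared errors.

# Re-score the two hypothetical forecasters above under three different metrics.
forecasters = {
    "Forecaster 1": [3, 2, 3],  # absolute temperature errors (deg)
    "Forecaster 2": [1, 4, 2],
}

def within_threshold(errors, threshold=3):
    # Site-style accuracy: fraction of forecasts with error at or below the threshold.
    return sum(e <= threshold for e in errors) / len(errors)

def average_error(errors):
    return sum(errors) / len(errors)

def sum_squared_error(errors):
    return sum(e * e for e in errors)

for name, errors in forecasters.items():
    print(f"{name}: accuracy {within_threshold(errors):.0%}, "
          f"average error {average_error(errors):.1f} deg, "
          f"sum of squares {sum_squared_error(errors)}")

# Forecaster 1: accuracy 100%, average error 2.7 deg, sum of squares 22
# Forecaster 2: accuracy 67%, average error 2.3 deg, sum of squares 21

The threshold-based score ranks Forecaster 1 ahead, while both error-magnitude metrics rank Forecaster 2 ahead - exactly the ambiguity described above.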

 

Put another way, the issue of accuracy is more complex than what the site portrays. To some extent, it may well depend on what the users are seeking. Some might want forecasts within a set parameter. Others might want forecasts as close to the actual outcomes as possible.

 

Also, the site doesn't compare strictly quantitative output (e.g., MOS) against its short list of forecasting entities, so an evaluation of the net gain/loss from the human skill/human bias interaction can't be made. In theory, human input should add value; otherwise, there's little point in going beyond automated output.

 

NHC did a study that covered the 2007-11 and 2012 hurricane seasons. In terms of track, only the Florida State University Superensemble outperformed the NHC (p. 31). In terms of intensity, the NHC typically outperformed all the guidance, including the FSU Superensemble (pp. 33-34). That's for the Atlantic basin. Interestingly enough, the FSU tool does not cover the eastern North Pacific basin. For that basin, the NHC's track forecasts were superior to all of the available guidance (p. 42), as was the case with its intensity forecasts (pp. 44-45).

 

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.4690&rep=rep1&type=pdf

 

So, at least with the limited data available (and I'm sure NWS has a lot of internal data), I don't think one can conclusively argue that automated guidance has reached the point where human input no longer provides value. With advances in computing and modeling, the value proposition may well change in the future.


For the record, he's calling for significant snows NW of 95, more towards I-81 in PA and up into upstate NY and New England.

