RU848789 Posted February 3, 2015

If this has already been covered elsewhere, please point me to the source. Otherwise...

Based on my gut instinct and knowledge of computational fluid dynamics models, which are quite similar to meteorological models (except ours operate on a much smaller scale of inches to feet, and they include the extra complexity of chemical reactions), I'd much rather be in the bullseye every model run than not.

Weather models have chaotic uncertainties, but they are deterministic, so in theory they should not exhibit biased "trends" per se; they should show steadily decreasing oscillations around a mean solution with each subsequent model run. That assumes there are no step changes or discontinuities in the initial conditions of later runs, which we all know can happen, especially as much more data is ingested into those runs.

So, in theory, if you're in the bullseye 4 days out, say, 100 times, reality should go north of the day-4 track about as often as it goes south of it, with the largest number of outcomes landing in the bullseye, and most outcomes at least producing a storm of some sort. By contrast, if at 4 days out the track is out to sea with minimal snowfall, there are fewer outcomes where the track ends up way north as a rain event than in the bullseye case, but there are also a large number of outcomes with no storm at all, and probably close to an equal number where the storm is a big snow hit.

I think you would only not want to be in the bullseye X days out if there were a known model bias at that lead time (especially if the bias existed for all of the major models). I think the preceding, while perhaps not perfectly correct, is directionally correct; please let me know if I have that wrong, though. Which leads me to my central question in the thread title.
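The symmetry argument above can be sketched with a toy Monte Carlo. This is a hypothetical illustration, not output from any real forecast model: it just assumes track errors are unbiased and roughly Gaussian, and the 100-mile spread and 50-mile "bullseye" band are arbitrary illustrative choices.

```python
import random

random.seed(42)

# Toy setup: the day-4 forecast puts the storm track at offset 0 ("bullseye").
# Verifying tracks differ by an unbiased, roughly Gaussian error (in miles).
N = 100_000
errors = [random.gauss(mu=0.0, sigma=100.0) for _ in range(N)]

north = sum(e > 0 for e in errors)        # verified north of the forecast track
south = sum(e < 0 for e in errors)        # verified south of the forecast track
near = sum(abs(e) <= 50 for e in errors)  # within 50 miles: "in the bullseye"

print(f"north misses:  {north / N:.3f}")  # ~0.5
print(f"south misses:  {south / N:.3f}")  # ~0.5
print(f"near bullseye: {near / N:.3f}")   # the single most likely band
```

With unbiased errors, north and south misses come out essentially equal, which is the "equal times north vs. south" claim; a real model with a genuine bias would show those two fractions diverging.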
I know there are all kinds of "verification scores" for the various models, but as far as I know these are aggregate scores of actual vs. modeled output over a month or a year of all weather, with the focus on temperature and precip accuracy. So, do verification scores actually exist for each model (or a model consensus) for all snowstorms, or all east coast snowstorms, or all NYC snowstorms, or for types of snowstorms such as simple clippers, Miller A's, Miller B's, SWFE events, etc.?

Those would be really interesting scores to compare, to truly know which models are best in which situations, since most of what I ever see around here appears to be anecdotal discussion of "known" biases, where no actual data on those "known" biases are ever shared, implying that maybe there aren't well-known biases for cyclones/snowstorms. I'd much rather know snowstorm verification scores than scores for the 300+ days of the year with really boring weather. Feedback would be most welcome...
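The event-stratified scoring asked about here is straightforward to compute once storms are classified. Below is a minimal sketch with made-up numbers: the event list, the storm-type labels, and the choice of mean absolute error on forecast snowfall are all assumptions for illustration; real scores would come from archived model output matched to observations.

```python
from collections import defaultdict

# Hypothetical verification records: (storm_type, forecast_snow_in, observed_snow_in).
events = [
    ("Miller A", 12.0, 10.5),
    ("Miller A", 8.0, 11.0),
    ("Miller B", 6.0, 1.0),
    ("Miller B", 4.0, 9.0),
    ("Clipper", 3.0, 2.5),
    ("SWFE", 5.0, 4.0),
]

# Accumulate absolute snowfall error and event count per storm type.
totals = defaultdict(lambda: [0.0, 0])
for storm_type, fcst, obs in events:
    totals[storm_type][0] += abs(fcst - obs)
    totals[storm_type][1] += 1

# Mean absolute error, stratified by storm type.
mae = {t: err_sum / n for t, (err_sum, n) in totals.items()}
for storm_type in sorted(mae):
    print(f"{storm_type}: MAE = {mae[storm_type]:.2f} in "
          f"over {totals[storm_type][1]} events")
```

Run per model (or per model consensus) over the same event set, a table like this would show directly which guidance handles, say, Miller B's worst, instead of relying on anecdotes about "known" biases.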
RU848789 (Author) Posted February 5, 2015

"Fascinating question, RU, let me tell you what I know about this topic..." :>) With so much talk, conjecture, and misinformation on these boards about the models and their performance during snowstorms, I thought this would provoke some discussion. Maybe I should post it on the main board instead...
RU848789 (Author) Posted February 13, 2015

OK, one last bump on this board, as this is the subforum where I spend most of my time and I'd be interested to hear feedback, thoughts, ideas, etc., especially on any verification scores for models for snowstorms in this region. Anyone? Anyone? Bueller?
danstorm Posted February 14, 2015

Quoting RU848789: "ok, one last bump on this board, as this is the subforum where I spend most of my time and I'd be interested to hear feedback, thoughts, ideas, etc., especially on any verification scores for models for snowstorms for this region. Anyone? Anyone? Bueller?"

Ask this in New England where mets are more generous with their time... that's all I've got. Interesting questions.
Rjay Posted February 27, 2015

Bump
LongIslandWx Posted February 27, 2015

I'm not aware of specific verification scores for models for snowstorms in the northeast. However, here is a link to the model verification scores kept by NCEP for the various models as a whole: http://www.nco.ncep.noaa.gov/sib/verification/
RU848789 (Author) Posted February 27, 2015

Quoting LongIslandWx: "I'm not aware of specific model verification scores for models for snowstorms in the northeast. However, here is a link to model verification scores kept by NCEP for various models as a whole: http://www.nco.ncep.noaa.gov/sib/verification/"

Perhaps I'm just incompetent, but I couldn't find model verification scores at the link you provided. Maybe they're buried in one of the links and I just couldn't find them. I've seen people post graphics comparing these scores for all of the models over certain periods of time, where the Euro is always "best," but again, that's not specific to a type of weather.