March 30 and April Fools Day Potential


stormtracker


All this noise...

Models have their inherent flaws. We know a good deal of them, and we make forecasts based on what the model output is and what we think will actually happen compared to that output. The lag in the formation of the coastal low and the cut-off of the moisture flow from the Gulf by convection in the Southeast are two things that are typically mishandled by the models. Using this information, a drier solution for the Mid-Atlantic was to be expected. Most mets caught onto this pretty early, and the real forecasts have been fairly consistent and mostly agree with what the current models are producing.

It's not the model's fault that you don't know how to use it. Right or wrong, we're a hell of a lot better off with them than without them. Stop complaining unless you actually know how they work and can make a real argument against them. Meanwhile, feel free to ask questions (see the Ask a Met thread in the main forum) about why the models do what they do and what you can do to anticipate and correct for model inaccuracies.


All this noise...

Models have their inherent flaws. We know a good deal of them, and we make forecasts based on what the model output is and what we think will actually happen compared to that output. The lag in the formation of the coastal low and the cut-off of the moisture flow from the Gulf by convection in the Southeast are two things that are typically mishandled by the models. Using this information, a drier solution for the Mid-Atlantic was to be expected. Most mets caught onto this pretty early, and the real forecasts have been fairly consistent and mostly agree with what the current models are producing.

It's not the model's fault that you don't know how to use it. Right or wrong, we're a hell of a lot better off with them than without them. Stop complaining unless you actually know how they work and can make a real argument against them. Meanwhile, feel free to ask questions (see the Ask a Met thread in the main forum) about why the models do what they do and what you can do to anticipate and correct for model inaccuracies.

I generally agree, though I'm not sure most people were saying we'd get very little here. Yesterday was a "bust" for many local forecasters and today seems to be as well. I do think there are signals you get from models, changing season to season, that can guide you once you know what they are doing "wrong". I also think there is an overreliance on models these days. Now I would never argue we should not use models, as that's a ridiculous argument, but I would maybe argue that forecasts should not be changed model run to model run. Many blips are just that, so staying steady with a forecast (unless it's a nowcast) for 12-24 hours between shifts does not hurt. That said, these days people feel like they should be getting or sharing fresh info every 6 hours, and I think that can be as detrimental as helpful.


the soundings pretty consistently showed some snow for areas west of the Blue Ridge. in the end I had a trace, on the grass, which melted overnight. there was never a real chance of more than that.

the issue is looking at models only. that's pointless and is simply a 2 dimensional way to look at weather. anyone saying how models are "wrong" when they can't take into account climo, etc., is extremely short-sighted, at best.

modelcasting != forecasting

Not the argument at all. That ^ is exactly right. But you implied that the models were right. If you look hard enough, you'll probably be able to find one run that was right, but mostly they haven't been too good. A model solution is either going to be correct or it isn't. The fact that it doesn't take climo into account may be a good reason to point to when saying don't take the model at face value. You're making a good point that modelcasting isn't weather forecasting, but that isn't what the original post was about. It was simply that the models haven't been very good at giving a good prediction of what will ultimately happen, and they haven't. The reasons why can be debated.


All this noise...

Models have their inherent flaws. We know a good deal of them, and we make forecasts based on what the model output is and what we think will actually happen compared to that output. The lag in the formation of the coastal low and the cut-off of the moisture flow from the Gulf by convection in the Southeast are two things that are typically mishandled by the models. Using this information, a drier solution for the Mid-Atlantic was to be expected. Most mets caught onto this pretty early, and the real forecasts have been fairly consistent and mostly agree with what the current models are producing.

It's not the model's fault that you don't know how to use it. Right or wrong, we're a hell of a lot better off with them than without them. Stop complaining unless you actually know how they work and can make a real argument against them. Meanwhile, feel free to ask questions (see the Ask a Met thread in the main forum) about why the models do what they do and what you can do to anticipate and correct for model inaccuracies.

I agree with this, and I think it's a good post. Most of us, myself definitely included, don't know enough to anticipate and correct for model inaccuracies. All I do is look at the models, then come here to see what the people who do know think about what they are spitting out. I was just saying that I don't think the models have been too good this year. They have been consistent only in the inconsistency between runs. Granted, I've let them reel me in a few times with "good looking" solutions. That's my fault. But the fact that you guys who do know scoff at the solutions the models spit out tells you just how accurate they are (this winter). I don't blame the models because what happens in reality isn't exactly what they said would happen. I blame me for being dumb enough to take what they show at face value. You, and others, taught me a valuable lesson with that storm in late Feb. I was absolutely convinced that Monday morning that I was getting a good snow that night. I saw your forecast snow map and thought this guy is just being negative. Well, I found out better later that night. I guess all I'm saying is that the models, even though they are good enough to do amazing things, aren't good enough to take at face value. That's where the experts come in, and some of us are smart enough to listen. And, one other thing, you are correct that the "real" forecasts have been good for this storm. They have.


All this noise...

Models have their inherent flaws. We know a good deal of them, and we make forecasts based on what the model output is and what we think will actually happen compared to that output. The lag in the formation of the coastal low and the cut-off of the moisture flow from the Gulf by convection in the Southeast are two things that are typically mishandled by the models. Using this information, a drier solution for the Mid-Atlantic was to be expected. Most mets caught onto this pretty early, and the real forecasts have been fairly consistent and mostly agree with what the current models are producing.

It's not the model's fault that you don't know how to use it. Right or wrong, we're a hell of a lot better off with them than without them. Stop complaining unless you actually know how they work and can make a real argument against them. Meanwhile, feel free to ask questions (see the Ask a Met thread in the main forum) about why the models do what they do and what you can do to anticipate and correct for model inaccuracies.

I think you can agree, though, that the ECMWF had a pretty bad year by its standards. The GFS upgrade improved the model, but old biases that were accounted for may not have applied to it this winter, which hurt some forecasts that assumed the old trends. Then there is the constant posting of "snowfall forecasts" by Topper, Ryan, or other non-mets that only skew the issue, when we have some of the best mets in the world on this forum (imo).

I feel it's less the models and more the winter in general. Models would show the snow-hole, just adding to frustration and model bashing.

Example being this :arrowhead:

[image: 18z GFS 850 mb temperature/SLP forecast]


I generally agree, though I'm not sure most people were saying we'd get very little here. Yesterday was a "bust" for many local forecasters and today seems to be as well. I do think there are signals you get from models, changing season to season, that can guide you once you know what they are doing "wrong". I also think there is an overreliance on models these days. Now I would never argue we should not use models, as that's a ridiculous argument, but I would maybe argue that forecasts should not be changed model run to model run. Many blips are just that, so staying steady with a forecast (unless it's a nowcast) for 12-24 hours between shifts does not hurt. That said, these days people feel like they should be getting or sharing fresh info every 6 hours, and I think that can be as detrimental as helpful.

Do you mind posting/paraphrasing what the local forecasts were? I wasn't paying too much attention to them :P We did a pretty good job ITT, though :D

Part of the reason why I didn't make any real snowfall forecasts 2-3 days out is because I knew the models were going to screw up big time in some key areas and I didn't feel like putting forth the effort in making a map for a rather uneventful storm in our region. The other mets probably have/had something nagging them in the back of their heads, but they're too afraid to stray away from the model output, especially output that's been fairly consistent and supported by other models. Hell, even I made that mistake of going with the consistency/majority output early on, though I wasn't wholly confident that it would verify. But that's when you play the conservative card and wait to bust out the "big storm" card for the Mid-Atlantic until it looks like the models are actually handling it well, which is something JB et al failed to do.


Pertaining to how people think the models have been performing more poorly this season... I'm quite confident that the current models are the best we've got, and I would take them over any legacy versions any day. A model's accuracy can be directly tied to the weather pattern itself and how easy/difficult it is for the model to calculate the forecast based on these setups. Did the models take a hit in overall accuracy? Probably. Should this be a reason to complain? Hell no. It's not like there's something better out there.

There isn't any easy solution out there to make them drastically better (unless anyone can miraculously stumble upon one). It's like complaining about a car that can't go past 150 mph because you want it to go 200 mph... it's simply beyond its capacity to do such a thing. Wanting the models to perform better is asking the models to go beyond what they're actually capable of, and unless you're one of the people developing/improving the code and computational power, you should spend less time complaining and more time learning how to deal with what we've got.


Do you mind posting/paraphrasing what the local forecasts were? I wasn't paying too much attention to them :P We did a pretty good job ITT, though :D

Part of the reason why I didn't make any real snowfall forecasts 2-3 days out is because I knew the models were going to screw up big time in some key areas and I didn't feel like putting forth the effort in making a map for a rather uneventful storm in our region. The other mets probably have/had something nagging them in the back of their heads, but they're too afraid to stray away from the model output, especially output that's been fairly consistent and supported by other models. Hell, even I made that mistake of going with the consistency/majority output early on, though I wasn't wholly confident that it would verify. But that's when you play the conservative card and wait to bust out the "big storm" card for the Mid-Atlantic until it looks like the models are actually handling it well, which is something JB et al failed to do.

I honestly don't pay that much attention to forecasts outside here, the NWS and CWG, so I can't necessarily give many examples. However, I think many assumed we'd get more rain than seems to be in play. On snow, anyone with real local knowledge backed up by climo knew a 3-6 with spots of 12 was laughable the instant they heard it. Not to say it CAN'T happen (though maybe it can't anymore this late -- that's another debate)... but it's not something you run with a few days out, for sure.

People are quick to disregard seasonal trends. I think that's a mistake. It's a fine line, as trends are made to be broken etc. Still, a fast pattern with a kicker right behind this event means it's not going to have much time to amplify/organize. We've seen that plenty this year-- I think it's fairly common in Nina. Take DT.. his analysis from the other night was actually fairly spot on compared to JB who was hyping snow into D.C., but he still ended up caving to the models it seems and that probably ends up hurting his forecast a bit. It is difficult to look at 4 models in general agreement several days out and say without doubt they are wrong, but they often are at least with some details.

To me each piece of guidance over a 12-24 hour period should be combined into a "superensemble" in the mind, then combined with local knowledge and climo. Now, not many can do that I suppose... though you don't have to be a met to do it well. It's mostly about watching for a long time and learning what is needed etc. I see some comments about only listening to mets (I do however believe Bob Ryan is a met!)... I am not sure that's a terribly valuable rule to live by. I've run across plenty of mets who are sub-par forecasters even if they have a Masters or whatnot. For instance, I strongly believe I can outforecast many mets personally even though I don't understand how MOS was created etc.
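The "superensemble in the mind" idea above can be sketched in code. This is purely a toy illustration of the weighting logic (the model names, skill weights, and QPF numbers are all made up, not real verification data): weight each run by a subjective model-skill factor and discount older runs, then blend.

```python
# Toy sketch of the "mental superensemble": blend QPF (inches) from recent
# model runs, weighting better-verifying models and newer runs more heavily.
# All names and numbers below are illustrative assumptions.

def superensemble(runs, model_skill):
    """Weighted blend of QPF from recent model runs.

    runs: list of (model_name, hours_old, qpf) tuples
    model_skill: dict mapping model name -> subjective skill weight
    """
    total_weight = 0.0
    blended = 0.0
    for model, hours_old, qpf in runs:
        # Newer runs count more; halve the weight every 12 hours of age.
        recency = 0.5 ** (hours_old / 12.0)
        weight = model_skill.get(model, 0.5) * recency
        blended += weight * qpf
        total_weight += weight
    return blended / total_weight

# Last 12-24 hours of (hypothetical) guidance for one point:
runs = [("GFS", 0, 0.4), ("ECMWF", 6, 0.7), ("NAM", 0, 0.2), ("GFS", 6, 0.5)]
skill = {"ECMWF": 1.0, "GFS": 0.8, "NAM": 0.6}
print(f"blended QPF: {superensemble(runs, skill):.2f} in")
```

The blend lands between the wettest and driest runs instead of whipsawing with each new cycle, which is the steadiness the post is arguing for; local knowledge and climo would then adjust that number further.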

I know all of us get wrapped up in watching each model specifically but the excitement over any one run should be much more tempered until you get into a situation like last year where everything tells you big snow (or severe, whatnot) is coming etc. Now that kills a lot of the fun for many. But you still have to wonder if some people will ever learn.


Pertaining to how people think the models have been performing more poorly this season... I'm quite confident that the current models are the best we've got, and I would take them over any legacy versions any day. A model's accuracy can be directly tied to the weather pattern itself and how easy/difficult it is for the model to calculate the forecast based on these setups. Did the models take a hit in overall accuracy? Probably. Should this be a reason to complain? Hell no. It's not like there's something better out there.

There isn't any easy solution out there to make them drastically better (unless anyone can miraculously stumble upon one). It's like complaining about a car that can't go past 150 mph because you want it to go 200 mph... it's simply beyond its capacity to do such a thing. Wanting the models to perform better is asking the models to go beyond what they're actually capable of, and unless you're one of the people developing/improving the code and computational power, you should spend less time complaining and more time learning how to deal with what we've got.

The models are pretty damn good imo, especially if you're looking for generalities. People expect things which are too close to an acceptable error like knowing exact qpf down to the .1". I think this was a particularly difficult winter given the patterns we saw, and that should have factored into expectations. Yet I have no clue how any of us would know a storm is coming in 5 days or the pattern looks warmer in 10 without models. I also think that people have selective memories and remember model busts or wins differently than they actually occur.
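To put a number on the "acceptable error" point: a quick way to see whether 0.1" precision is realistic is to compute the mean absolute error of modeled vs. observed QPF over a set of events. A minimal sketch (the values are invented for illustration, not real verification data):

```python
# Hypothetical QPF verification sketch: forecast vs. observed liquid (inches).
# The numbers are made up; real verification would use seasonal station data.

def mean_absolute_error(forecast, observed):
    """Average absolute QPF error, in inches."""
    pairs = list(zip(forecast, observed))
    return sum(abs(f - o) for f, o in pairs) / len(pairs)

forecast_qpf = [0.4, 1.1, 0.0, 0.7, 0.2, 0.9]
observed_qpf = [0.6, 0.8, 0.1, 0.3, 0.2, 1.4]

mae = mean_absolute_error(forecast_qpf, observed_qpf)
print(f"MAE: {mae:.2f} in")
```

Even in this made-up sample the typical error is a multiple of 0.1", which is the point: expecting guidance to nail QPF to the tenth of an inch is expecting precision finer than the error bars.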


People are quick to disregard seasonal trends. I think that's a mistake. It's a fine line, as trends are made to be broken etc. Still, a fast pattern with a kicker right behind this event means it's not going to have much time to amplify/organize. We've seen that plenty this year-- I think it's fairly common in Nina. Take DT.. his analysis from the other night was actually fairly spot on compared to JB who was hyping snow into D.C., but he still ended up caving to the models it seems and that probably ends up hurting his forecast a bit. It is difficult to look at 4 models in general agreement several days out and say without doubt they are wrong, but they often are at least with some details.

I think that's the crux of the issue--- it was addressed in the "why do the models suck" thread on the main board. People assume that the models have the same degree of accuracy no matter what the pattern is. So, anything resembling a consensus on the models gets taken as " this will happen." Compare 08/09 with 09/10, for example-- remember early February '09? That was the period with models being all over the place, resulting in the big "accuhype" non-storm. Then of course Philadelphia gets in on the inverted trough and picks up the miraculous 8" of snow after the main body of light snow moved off the east coast. There were so many complaints that period about poor model performance.

Contrast that to last winter, where we were counting on QPFs being correct down to the 0.1". When the models shifted, the forecasts responded, and the forecasts verified. 2/3/10 saw the models dramatically bump up precip in the 12Z runs right before the onset, and we ended up with a quick WSW that of course verified across much of the region. Also, 1/30/10 -- as soon as models began to shift the precip back north, it was assumed that that would happen. The blocking and amplified flow made forecasting much easier (and of course it helped that we were in the middle of the precip shield for most of the storms, not on the edges). Even the clipper in early Jan behaved: 0.1" modeled, ~0.1" verified.


Epic model fail, they can't even get a storm right within 24 hours anymore... I really don't care much about mets who work at NCEP; models have been terrible this year, bottom line. I know this winter has been based on difficult patterns and phasing storms and all, but jeez, we should have a little better consistency than this.

Noted. Thanks for your brilliant analysis.


Pertaining to how people think the models have been performing more poorly this season... I'm quite confident that the current models are the best we've got, and I would take them over any legacy versions any day. A model's accuracy can be directly tied to the weather pattern itself and how easy/difficult it is for the model to calculate the forecast based on these setups. Did the models take a hit in overall accuracy? Probably. Should this be a reason to complain? Hell no. It's not like there's something better out there.

There isn't any easy solution out there to make them drastically better (unless anyone can miraculously stumble upon one). It's like complaining about a car that can't go past 150 mph because you want it to go 200 mph... it's simply beyond its capacity to do such a thing. Wanting the models to perform better is asking the models to go beyond what they're actually capable of, and unless you're one of the people developing/improving the code and computational power, you should spend less time complaining and more time learning how to deal with what we've got.

I've tried to say this (bolded above) time and time again [as an aside, they didn't really take a hit in overall accuracy at all...in fact, the GFS set an all time record in the NH by some metrics this past December]. I realize that part of this notion comes from people only paying attention during high impact events when predictability can be lower, but even then, I'm still not convinced the models really did as poorly as being portrayed here (relative to past performance).

I can dig up the time series showing the yearly improvement relative to a frozen model we've been running for over 20 years, but I don't think people will get it even then.
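Skill comparisons like the one described above are typically scored with the anomaly correlation coefficient (ACC) of, say, 500 hPa height forecasts against analyses, with both expressed as departures from climatology. A minimal sketch of the metric itself (pure Python over illustrative gridpoint lists, not NCEP's actual verification code):

```python
import math

def anomaly_correlation(forecast, verification, climatology):
    """Anomaly correlation coefficient between a forecast and the verifying
    analysis. All arguments are equal-length lists of gridpoint values
    (e.g. 500 hPa heights); anomalies are departures from climatology.
    """
    fa = [f - c for f, c in zip(forecast, climatology)]
    va = [v - c for v, c in zip(verification, climatology)]
    num = sum(x * y for x, y in zip(fa, va))
    den = math.sqrt(sum(x * x for x in fa) * sum(y * y for y in va))
    return num / den

# A forecast that matches the analysis exactly scores 1.0:
print(anomaly_correlation([552.0, 558.0, 564.0],
                          [552.0, 558.0, 564.0],
                          [555.0, 558.0, 561.0]))
```

Tracking this score for both the operational model and a frozen baseline is what lets you separate "the models got worse" from "the pattern got harder": in a difficult pattern both scores drop, while the gap between them keeps showing the long-term improvement.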


I think that's the crux of the issue--- it was addressed in the "why do the models suck" thread on the main board. People assume that the models have the same degree of accuracy no matter what the pattern is. So, anything resembling a consensus on the models gets taken as " this will happen." Compare 08/09 with 09/10, for example-- remember early February '09? That was the period with models being all over the place, resulting in the big "accuhype" non-storm. Then of course Philadelphia gets in on the inverted trough and picks up the miraculous 8" of snow after the main body of light snow moved off the east coast. There were so many complaints that period about poor model performance.

Contrast that to last winter, where we were counting on QPFs being correct down to the 0.1". When the models shifted, the forecasts responded, and the forecasts verified. 2/3/10 saw the models dramatically bump up precip in the 12Z runs right before the onset, and we ended up with a quick WSW that of course verified across much of the region. Also, 1/30/10 -- as soon as models began to shift the precip back north, it was assumed that that would happen. The blocking and amplified flow made forecasting much easier (and of course it helped that we were in the middle of the precip shield for most of the storms, not on the edges). Even the clipper in early Jan behaved: 0.1" modeled, ~0.1" verified.

Well, to me, one real issue under the surface is we all (even the complainers) know the models are quite good and continually getting better. So, it's almost counter-intuitive to look at guidance and say it's probably wrong because such and such. Then you have the issue where one model or the other will go through a "slump" or do better than expected. Getting 'burned' by the Euro several times this winter made things even more confusing.

Some claim that a met/forecaster is just regurgitating model data. And I'm not sure that's 100% wrong, but you clearly get more from the deal through interpretation by a human -- unless they are primarily trying to drive hits to their site! That adds a lot of extra error though at times. Why did everyone assume the NAM was wrong with no wrapped up low this far south at 84? "It's the NAM at 84". Oops. No one is really immune to it -- Wes is close.

Some rains are breaking out now (interesting banding nw of the cities)... but my forecast from yesterday is wrong as far as I'm concerned. I felt there was a shot this could happen but I leaned toward the GFS/EURO not busting QPF by a sizable factor 24-36 hours out. Then the lessons from the past come flashing back.. ;)


The models are pretty damn good imo, especially if you're looking for generalities. People expect things which are too close to an acceptable error like knowing exact qpf down to the .1". I think this was a particularly difficult winter given the patterns we saw, and that should have factored into expectations. Yet I have no clue how any of us would know a storm is coming in 5 days or the pattern looks warmer in 10 without models. I also think that people have selective memories and remember model busts or wins differently than they actually occur.

I think that's the most amazing thing about weather models. Electricity and metal say the weather will be stormy in 3 days, when the storm doesn't even exist yet, and it happens. I know there's not some gremlin in the models that designs them to show me the exact scenario that I'd like to see 4 days out, only to then yank it away, but it sure was frustrating to see that happen so many times this winter. That's the part of model watching that is just beyond my knowledge. I don't know when to cast a skeptical eye toward them and when to buy in. I guess all I can hope for is to listen (read) and try to learn.


Some rains are breaking out now (interesting banding nw of the cities)... but my forecast from yesterday is wrong as far as I'm concerned. I felt there was a shot this could happen but I leaned toward the GFS/EURO not busting QPF by a sizable factor 24-36 hours out. Then the lessons from the past come flashing back.. ;)

...or even 6 hours out, from the 18Z model runs? Some parts of SE MA near the canal have gotten a 3" blast of snow already and may end up with more snow before the changeover than much of northern CT does overnight. None of the models (except the RUC) showed that happening verbatim.


I flew to Ft. Myers and back today. Let me tell you, the thunderstorms in Florida were quite an amusement park ride. I've never felt turbulence that bad, especially on the way there in the Gulfstream II, when we were in tight airspace and forced by ATC to fly right into one of them. The plane only got hit by lightning once; luckily there was no real damage.


I've tried to say this (bolded above) time and time again [as an aside, they didn't really take a hit in overall accuracy at all...in fact, the GFS set an all time record in the NH by some metrics this past December]. I realize that part of this notion comes from people only paying attention during high impact events when predictability can be lower, but even then, I'm still not convinced the models really did as poorly as being portrayed here (relative to past performance).

I can dig up the time series showing the yearly improvement relative to a frozen model we've been running for over 20 years, but I don't think people will get it even then.

Thanks for the response, dtk. I should clarify that when I mention accuracy taking a hit, I mean a very small decline in relative performance, and even that is based mostly on the high-impact events. There's a lot of getting caught up in details that indicate significant errors in select areas, but as a whole the models are performing as well as they ever have.


Thanks for the response, dtk. I should clarify that when I mention accuracy taking a hit, I mean a very small decline in relative performance, and even that is based mostly on the high-impact events. There's a lot of getting caught up in details that indicate significant errors in select areas, but as a whole the models are performing as well as they ever have.

I knew what you meant and tried to include a caveat about specific events (and you're right, especially for high impact events and the IMBY nature of noticing performance/interpretation). Expectations (by some) seem to be beyond our current capabilities, and it can be frustrating. I really do appreciate the thoughtful, useful comments on this topic (model performance in general) by you, Ian and others (so for that, thanks!).


the soundings pretty consistently showed some snow for areas west of the Blue Ridge. in the end I had a trace, on the grass, which melted overnight. there was never a real chance of more than that.

the issue is looking at models only. that's pointless and is simply a 2 dimensional way to look at weather. anyone saying how models are "wrong" when they can't take into account climo, etc., is extremely short-sighted, at best.

modelcasting != forecasting

Which is exactly what JB and others seemed to do with this system. At one point, JB was talking about widespread 3-6" amounts, with localized 12" amounts for points north of DC. Never could figure out how he was coming up with that. Frankly, that's a pretty extreme forecast for January/February, much less late March/early April, given the climatology and how models have underperformed this entire winter for our area.

