
Why Are Models So Good?



I feel the need to start this thread for a couple of reasons. First, all of the numerical guidance had issues simulating this storm, yet in reality the models performed quite admirably. Discussing this storm with fellow East Coast mets and non-mets provided a lot of insight into how others perceive numerical weather models and how they approach weather forecasting in general. I don't want to go too far off on a tangent, but I am sometimes aggravated by the general weather enthusiast's lack of understanding of numerical weather models and their capabilities. Should we all be experts? Of course not--it would be impossible for everyone to have an intimate understanding of the inner workings of a numerical model. But considering we use numerical guidance on a daily basis, shouldn't we at least understand how it works? At a minimum, folks should not be bashing the guidance unless they understand what is going on. Nobody expects an enthusiast to have the knowledge of forecasting and models that a met has, but one should consider learning about the guidance, the models, and forecasting before bashing either the models or the forecaster. Read on.

As we entered day 2-3 of the storm threat, the NAM was one of the first models to suggest the extreme potential of the setup. We can throw around all sorts of technical weather jargon, but this setup had amazing potential with a very small window for development. Most of the global models, up to this point, were showing a threat, with the ensemble guidance more amped than the operational runs. The operational models did seem slow to react to height-field amplification trends as the -NAO blocking pattern that had dominated up to that point rapidly broke down, and changes in the height field seemed to take 2-4 runs to trickle down into the surface fields as this storm passed through the Ohio Valley. As the models trended more amplified each run, with a much more magnified shortwave, guidance continued to hint at the potential for a strong East Coast threat. A trend was developing, however: the Ohio Valley low would be much stronger than initially believed (and farther north) in response to the amplified height field. Moreover, in response to the more amplified pattern and the slower track of the ejecting plains low, the thermal advection pattern (warm-air-advection ascent) would support a weak Gulf of Mexico/East Coast surface low even though there was no "phase" with the southern-stream wave (early guidance had suggested the southern wave would phase in the mid-levels). What did all these trends lead to? The potential for a highly non-linear, positive-feedback coastal storm threat.

Getting back to our friend the NAM: as the guidance continued to trend more amplified in the height field, the NAM and SREF (NCEP's mesoscale ensemble guidance) began to show a more significant coastal threat, with rapid positive feedback and a "hooking," stalling coastal. With the shortwave ejecting east-northeast over the Gulf Stream in the vicinity of the Mid-Atlantic, it was clear the window of time for development was very small. I was aghast to see a lot of meteorologists and non-meteorologists simply "toss" the NAM and SREF guidance without even giving it consideration. Typical quotes included "It is just the NAM," "78 hours isn't within the NAM's wheelhouse!" or "The ECMWF and GFS are east--toss it!" In my opinion, this was unsound meteorology, and it likely stems from a lack of understanding of the numerical models themselves. The models did a superb job of showing meteorologists verifiable trends. The ECMWF and GFS get special kudos for slowly but continuously amplifying the upper-level height field, and it was becoming clear the potential existed for a powder-keg explosion off the coast. Why were folks simply disregarding the NAM and SREF despite these trends? I honestly don't know.
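
(To make the "watch the trends" point concrete, here is a minimal Python sketch of what quantifying run-to-run trends can look like. The numbers are invented for illustration--they are not the actual runs from this event: log each cycle's projected central pressure and track offset, then check whether the changes are consistent in sign.)

```python
# Minimal trend-tracking sketch. Values below are hypothetical.
runs = [
    # (cycle, projected minimum central pressure (hPa),
    #  track offset (km) vs. the first run; negative = farther west)
    ("12Z", 996, 0),
    ("18Z", 992, -40),
    ("00Z", 988, -70),
    ("06Z", 984, -110),
]

pressures = [p for _, p, _ in runs]
offsets = [x for _, _, x in runs]

dp = [b - a for a, b in zip(pressures, pressures[1:])]
dx = [b - a for a, b in zip(offsets, offsets[1:])]

# A change that keeps the same sign cycle after cycle is a verifiable trend;
# a one-run flip-flop is noise.
print("pressure changes per cycle:", dp, "hPa")
print("track changes per cycle:   ", dx, "km")
print("consistent deepening:", all(d < 0 for d in dp))
print("consistent westward shift:", all(d < 0 for d in dx))
```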

I am an advocate of weather analysis. This includes breaking down and analyzing the weather pattern at hand and fully understanding the atmospheric setup. We can key in on certain patterns and trends, and we can focus on potential even if the models themselves don't hint at it. This includes using all those dynamic and mathematical equations we meteorologists learned in college as undergraduates and graduates. I mean, come on, how can we break down a threat if we don't understand what is happening? How can we, as meteorologists, assign a probability to an event if all we do is regurgitate the models (model-casting)? Worse, how can we even use the numerical guidance correctly if we don't know how the models work? This is the problem today, and it is likely why some folks were so quick to believe the NAM simply could not be right when it differed from the GFS/ECMWF. Kudos to the forecasters/meteorologists/NWS for realizing the potential, but I was aghast at how many simply dismissed it given the evidence at hand.

So, getting back to the topic of models, why should we know how they work? The better question may be: if there is a potential threat, how do we know whether the model is wrong or out to lunch? Given a good analysis, it would have been clear to a meteorologist that a ticking time bomb was lying in wait. The depth and strength of the dynamic tropopause over the coast was incredible as the leading shortwave ejected, and it was clear that an extreme atmospheric response was possible in such a hydrodynamically unstable setup. Yet, as we neared the event and the high-resolution non-hydrostatic guidance continued to suggest a threat while the global operational models stayed east, I continued to see trained mets "tossing" the guidance or simply regurgitating the "skill scores" of the global models as reasoning to toss the other guidance. Heck, if the globals track east consistently, it must mean they are correct? I will leave it to others to reconsider their knowledge of numerical models, how they work, and, perhaps, how they forecast. Model-casting won't cut it, and I think it is high time all meteorologists put their degrees to work. I find myself learning and busting daily, and I put forth full effort into correcting those mistakes--I hope others will make that effort as well. But how do we correct our own errors if we don't analyze the forecast to begin with, or if we don't know how the models themselves work?

To end, how did the numerical guidance fare? In some ways, awfully; in others, very well. Overall, I'd say the models did an excellent job considering the situation at hand. The storm eventually tanked out farther north and west than any guidance showed, and it was much deeper than projected. By 06Z the low was at 998 hPa, already 3-5 hPa lower than the 0Z guidance simulated and much more intense (intensity here refers to the gradient, not the central pressure). By 09Z rapid deepening was continuing, and the pressure at Montauk, NY (KMTP) was down to 988 hPa, about 6-8 hPa lower than guidance suggested. As the storm headed off the Long Island coast, it continued to bomb: central pressure eventually dropped 18 hPa in 5 hours at Montauk, and some offshore buoys were even more impressive.

So how did the models fare? In retrospect, exactly as one would have expected with an understanding of the strengths, weaknesses, biases, and capabilities of the numerical guidance and of the dynamic situation at hand. The non-hydrostatic mesoscale guidance won the day, and the global models never fully caught on. It was sad to me, as the event unfolded, to see weather enthusiasts and non-mets alike "jumping off the bridge" around 21-00Z because the satellite presentation was not exploding (it wasn't supposed to be yet). I heard unsound meteorological comments from meteorologists, such as "The Ohio Valley surface low is dominating, therefore the coastal will head east" (really?). Instead of NOWcasting like some were doing, some folks were happy to glance at the new models coming in without even considering how well the previous runs were handling the situation! In reality, they should have been keying in on the even greater explosive potential as the Ohio Valley shortwave continued to amplify and strengthen (the resulting surface low was deeper and stronger than projected) and as the coastal low ran consistently deeper than guidance projected. These small facts are amazingly important in a NOWcasting event where rapid positive-feedback cyclogenesis is possible, since things can change quickly. Kudos to the great folks I tracked the storm with and to those who provided insightful thoughts, comments, and knowledge (Earthlight especially--what an amazing forecast from an already impressive meteorologist); I am happy to know there are folks out there putting in the time to perform a true analysis. That seems to be becoming a rarity these days; I think we can change that, though.
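
(For reference, the standard yardstick for "bombing" is the Sanders and Gyakum (1980) criterion: a latitude-adjusted pressure fall of 24 hPa per 24 hours, scaled by sin(lat)/sin(60°). A minimal Python sketch using the Montauk numbers above--treating the 5-hour fall as a sustained 24-hour rate, which is an assumption made purely for illustration:)

```python
import math

def bergerons(dp_hpa, hours, lat_deg):
    """Deepening rate in bergerons (Sanders & Gyakum 1980): 1 bergeron is a
    24 hPa / 24 h fall scaled by sin(lat)/sin(60 deg); >= 1 is a "bomb"."""
    rate_24h = dp_hpa * 24.0 / hours  # extrapolate the observed fall to 24 h
    threshold = 24.0 * math.sin(math.radians(lat_deg)) / math.sin(math.radians(60.0))
    return rate_24h / threshold

# 18 hPa in 5 hours at Montauk, NY (roughly 41N):
print(f"{bergerons(18.0, 5.0, 41.0):.1f} bergerons")  # ~4.8x the bomb criterion
```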


Good post, though the non-hydrostatic models had a pretty high bias down this way (DC area) on a couple of runs; I basically agree with your comments. I think part of the problem is that people tend to see things in black or white. There was always potential for a big storm north of DC, but even that wasn't assured: it took the stronger 500-hPa circulation, and one that didn't get too far north. Models are tools, and unlike people, they are much better at figuring out the non-linear feedbacks that typically occur when a major storm is in the offing.


Lots of people on here are tools as well, Wes.


Yeah Wes, I agree. I saw the struggles down in that region with both the non-hydrostatic models and the globals. The guidance had issues with the exact timing of the capture of the coastal low and with the positioning of the heavier snow bands in NoVA/DC/BA. Even then, no model seemed to do too hot there, but they did a good enough job with both the potential and the variability. I think you and CWG did well with that event considering how difficult a forecast it was.


Great post! People (mets included, and myself at times) will try to personify models in some way, and then make generalizations as we tend to do whenever a decision is presented (is this model good or bad?)......Yet the true, step-back, take-home message is that each model is numerically driven guidance! That's it! Numbers....lots of them....arranged slightly differently and integrated in similar (but not exactly the same) fashion over time.

The models, in this past case, had spreads no different than most any other time they prognosticate an entity in the global atmosphere....the enhanced scrutiny was driven solely by the fact that this entity was at a critical stage of its development, and that stage coincided with a HUGE populace watching and perceiving every mile of shift in the various models.....you don't hear about such few-mile shifts when a system progged 5 days out over Greenland shifts 100 miles over the course of the runs leading up to verification.

No model EVER gets it completely "right" (at some arbitrary scale)....not 5 days out, not 2 days out, not 2 hours out. We can only compare scores relative to verification and relative to other model camps. We are constantly learning model biases and deriving nice databases of historical analogues to make better forecasters of us all, if we understand that the models will always have spreads, "hiccups" (I hate that personification the most), and difficult patterns to resolve and integrate.
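
(As a deliberately simple illustration of "scores relative to verification," here is a Python sketch--with invented numbers--scoring two forecasts against the same verifying analysis. Real verification uses far richer metrics, but the relative-comparison idea is the same:)

```python
import math

def rmse(fcst, verif):
    # Root-mean-square error of forecast vs. verifying analysis.
    return math.sqrt(sum((f - v) ** 2 for f, v in zip(fcst, verif)) / len(fcst))

def bias(fcst, verif):
    # Mean error: positive means the model runs high on average.
    return sum(f - v for f, v in zip(fcst, verif)) / len(fcst)

# Invented 500-hPa heights (dam) at five points, plus the verifying analysis.
verif   = [540, 546, 552, 558, 564]
model_a = [538, 545, 553, 560, 567]
model_b = [542, 549, 555, 561, 566]

for name, f in (("model A", model_a), ("model B", model_b)):
    print(f"{name}: RMSE {rmse(f, verif):.2f} dam, mean bias {bias(f, verif):+.2f} dam")
```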

I hammer home as much as I can, without PO'ing folks, that models are tools.....learn how to use different tools for different projects, and we can all enjoy this love we share a bit more, without "model frustration."


I was happy with the results, as we never really had the potential for a big one like places to the north did. If the 500 low had dug more to the south we might have, but with no ridging behind it, it is pretty hard to get that much digging. One thing the NAM did very well, once it backed off the modest amounts down here, was the thermal structure within the clouds. One thing I regret is mentioning freezing rain only once: the soundings did show the potential, and I blew them off because I've seen the NAM sometimes overdo keeping the cloud tops low.
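
(For what it's worth, the sounding signature being described reduces to a simple check. A crude Python sketch with a hypothetical profile--a real check would use wet-bulb temperatures and layer depths, not just signs:)

```python
def freezing_rain_signature(profile):
    """Crude check on (height_m, temp_C) pairs ordered surface-up: a
    subfreezing surface layer beneath an above-freezing warm nose aloft.
    A real check would use wet-bulb profiles and layer depths."""
    subfreezing_surface = profile[0][1] < 0.0
    warm_nose_aloft = any(t > 0.0 for _, t in profile[1:])
    return subfreezing_surface and warm_nose_aloft

# Hypothetical profile with a classic warm nose near 1500 m:
profile = [(0, -2.0), (500, -1.0), (1500, 2.5), (3000, -6.0), (5000, -18.0)]
print(freezing_rain_signature(profile))  # True -> worth mentioning ZR
```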


I have to add my two cents on this topic....depending on which part of the country you are from, the models either did A-OK or flip-flopped 100% within a very short time. Their prediction for the West Coast last weekend is a case in point. Based on what happened out here, I will never trust any of them again...EVER...more than 24 hours out!


Simply an outstanding post...well done. You make excellent points. I wasn't forecasting this one officially, but it was interesting to follow the models and see how it played out. This event shows that the NAM can really be an excellent model in certain situations.


That's the point.....it's guidance. There are reasons that the models err; we just can't always pinpoint each and every synoptic-, meso-, or micro-scale feature that either initializes improperly or just isn't resolved well enough....not to mention the varying physics packages/assumptions of the gazillion models and their offspring.

So don't "trust" models. Use them like you would any new tool. Learn their strengths, weaknesses and biases. It takes time, but many of these characteristics of many models are already documented. But there are surely more to uncover.


Good post Wes...most of us mets in SNE took this threat pretty seriously in the 96-hour range, before the NAM could even see it. But his point is a good one: once the storm actually started bombing, the global models would lose the ability to handle it because it was so compact and so intense, with a ton of convection...a classic storm for the NAM to hit a home run on versus the other guidance...but the key was: when does this bomb? You used a great rule with the vort max track--that it wouldn't bomb until too late for DC.

But up here, it was a big deal. There were differences in the height-fall rate among the models up here. Some suggested the max height falls might not occur until a little farther east...but the NAM and SREF indicated they could be closer to the coast, which would mean a much bigger impact on southern New England--a heavy snowfall versus a moderate to heavy snowfall.

I wrote a post-mortem analysis of this system here:

http://www.americanw...rn-into-a-hecs/

The exact details weren't apparent until the storm was already underway. We all knew it would be a big snow, but the difference between 12" and 24" was quite a fine line for southern New England. The storm needed to get caught by the rapidly developing upper-level low early enough to give a monster storm, and again, that wasn't apparent until the storm was already going on. Even the NAM didn't handle this process well enough for western areas of SNE.

We hedged at something like a 12-18" forecast with pockets of 20"+, which ended up being too conservative, but not terrible. We knew the snow rates would be off the charts, but the key was how long they would stay over the area....the upper-level low developing fast enough was the key to this. Instead of a 6-hour snow bomb, it was a 12-15-hour snow bomb with those snow rates, and the storm produced a much larger area of 20"+ than first thought.

You are one of the best mets on the board at realizing this (I'm talking to others here, not you, since you are the best there is with model qpf)...the model qpf doesn't mean much unless you have the upper-air setup to support it. It doesn't matter if the model spits out 1.75" of qpf over your head unless it makes sense from an upper-air perspective. That was the biggest problem we were wrestling with.


It was a fun NOWcast, for sure. Watching the event unfold that morning was exciting, because it was obvious that things were potentially going to be even more amped and explosive than the 12Z/18Z and then the 0Z guidance suggested. The point on timing was so important with this system, as you said; a 1-2 hour delay in when such a compact storm began rapid cyclogenesis could result in a huge track difference. I wasn't trying to give the NAM more credit than was due--not at all--but to point out that all numerical models are different, all have certain capabilities and strengths based on their configuration, and all played a key role in diagnosing the threat. Model bashing is often done with no understanding of what is truly happening, and more often than not it leads to poor results when one piece of guidance is leaned on over others with no scientific or verifiable reason.


I think you are right that the NAM got it most right...for where the storm impacted the largest. Obviously, a little farther SW in the infancy stages the NAM was a little enthusiastic...but once the storm got going, the NAM was correct....especially in southern New England. It was just a matter of waiting until the storm started to nuke out...once that started, the NAM had it. It had some runs where it tried to do it too quickly (the runs that gave Philly a foot and Baltimore 6-8")....but the overall idea--that once it got going it would take a very close, tucked track--was correct...and so were the prolific qpf totals over much of SNE.


Yes, well said--in fact, much better said than my convoluted discussion, which touched on far too many topics. One of the many points I was trying to hit home is that models are guidance only. Nothing should be taken verbatim, especially qpf fields and individual model tracks. Even run-by-run track changes meant little with this East Coast storm, since it was obvious this was going to be a highly non-linear beast with the potential for large track changes run to run. That is just part of the deal with such a compact and intense storm. Every model played a part here, and folks tossing guidance for no apparent reason made little sense.


We discussed this in Analog's thread awhile back, but I'll reiterate some of the same points: when people have a strong emotional stake in something, all rationality goes out the window. Not only that, they can't have any room for uncertainty in their lives--it's like some kind of immediate-gratification frenzy where they "have to" take a given model run as is (whether positive or negative)..... they don't even consider its biases, trends, or anything else..... they just take "face value" and make it reality in their minds, because it's the easiest thing to do. A more advanced case occurs when they let their irrationality actually influence their perception and see features on the models that simply aren't there (back when we had unmoderated model threads, I would see different posters interpreting models in completely different ways--completely opposite of each other, sometimes in back-to-back posts!). Of course, that makes them bipolar when the run-to-run variance is high and their emotions are on a seesaw. It's hard for some people to see models as just one little piece of the puzzle (or, to be more accurate, the models' individual solutions are single pieces of the puzzle, some that fit and some that do not), while ensembles, which aren't individual pieces of the puzzle, give you a fuzzy snapshot of what the whole puzzle might look like..... and thus can be more valuable than individual pieces. I apologize in advance for the imperfect analogy lol :P
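
(To put the puzzle analogy in concrete terms, a minimal Python sketch with invented member values: an ensemble buys you a mean, a spread, and a probability instead of one deterministic answer.)

```python
import statistics

# Ten hypothetical ensemble-member snowfall values (inches) at one point.
members = [14, 22, 8, 18, 25, 12, 20, 16, 11, 19]

mean = statistics.mean(members)        # central tendency
spread = statistics.stdev(members)     # uncertainty
p_12in = sum(m >= 12 for m in members) / len(members)  # exceedance probability

print(f"ensemble mean {mean:.1f} in, spread {spread:.1f} in")
print(f"P(>= 12 in) = {p_12in:.0%}")   # the fuzzy whole-puzzle snapshot
```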

And while we're on that subject, we could start another discussion about how people take analogs too literally. It's another function of that instant-gratification fetish--what they fail to understand is that so much of what occurs in a given season is based on randomness (a small amount of chaos can cause large downstream changes), so analogs are no more than pieces of the puzzle as well. A storm can miss a given location by 50 miles and still be a perfectly good analog because of the basic setup--and that's just one storm. Just imagine how much chaos accumulates over a whole season! And while having multiple analogs does make the picture a bit clearer, the "noise" of chaos will always exist and will need to be accounted for in the variance.


I just want to say that I fully endorse this post, and I hope people take it seriously and think about what is said. These things are automated model simulations, and if you take them verbatim, you are going to get burned every time. I work in data assimilation and NWP, and it continues to amaze me that the global models can sniff out potential developments like this over a week ahead of time. The comments about the size of the storm and about how to use the mesoscale (non-hydrostatic) and hi-res guidance are especially important. These models are going to be overdone in terms of QPF (especially the hi-res 4-km runs), but they sure did a fantastic job showing the potential, as has been discussed.

For anyone interested, there are a ton of plots showing side-by-side comparisons of the individual models' QPF forecasts with the CPC 1/8-degree observation-based analysis:

http://www.emc.ncep.noaa.gov/mmb/ylin/pcpverif/daily/2011/20110112/

The scale on the figures is fixed and the plots are automated (so it is sometimes difficult to ascertain detail), but they are always interesting to look at in hindsight.
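
(For anyone who wants to go beyond eyeballing those plots: a common gridded-QPF score is the equitable threat score. A minimal Python sketch with made-up grids--the operational verification behind those pages is far more involved:)

```python
def equitable_threat_score(fcst, obs, thresh):
    """ETS for exceeding `thresh`, built from the standard 2x2 contingency
    table (hits/misses/false alarms), with hits expected by chance removed."""
    n = len(fcst)
    hits = misses = false_alarms = fcst_yes = obs_yes = 0
    for f, o in zip(fcst, obs):
        fy, oy = f >= thresh, o >= thresh
        fcst_yes += fy
        obs_yes += oy
        if fy and oy:
            hits += 1
        elif oy:
            misses += 1
        elif fy:
            false_alarms += 1
    chance = fcst_yes * obs_yes / n
    denom = hits + misses + false_alarms - chance
    return (hits - chance) / denom if denom else 0.0

# Invented 24 h QPF (inches) on a tiny grid, versus an analysis:
model    = [0.2, 1.1, 1.8, 2.4, 1.0, 0.3]
analysis = [0.1, 0.9, 2.2, 1.9, 1.4, 0.2]
print(f"ETS (>= 1.0 in): {equitable_threat_score(model, analysis, 1.0):.2f}")  # 0.50
```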


Tremendous link dtk, thank you.

Here is the main link for all days/months for those interested.

http://www.emc.ncep.noaa.gov/mmb/ylin/pcpverif/daily/


Killer post in the beginning man, just awesome!

These are the things I miss. Dealing in energy, with a focus on longer-term trends, I miss out on some of the short-term fun. Don't get me wrong--these storms are still big in the energy world, but I don't have the time to devote as much attention to the important small-scale details. I love what I do now, but I do miss some of the "nowcasting" fun like you described at the beginning.


Great post as usual, BI.

Definitely agree with you on the thread title. I'm a modelologist (definitely not a meteorologist--those equations are insane :lol: )....when I first got into following the models more closely, I used to bash them all the time, especially the NAM. Now I don't, as I've learned more about them and what to expect from them, bias-wise. I don't know much, that's for sure, but bashing the models seems pointless; I don't even agree with bashing the GFS over its latest performance......

As LEK (George) points out, they are tools that should be used appropriately.


I've thought some about this thread, and I want to bring up a couple of points. While it is definitely true that a lot of mets don't work as hard as they should at doing a thorough old-fashioned analysis and at understanding how the models work, I can think of another reason for this besides people simply not caring enough to put in the extra time and effort. Most young mets today work in the private sector and face very large workloads--arguably much larger than NWS forecasters', and usually without the extra backup support during active weather, since that costs companies more money.

As I see it, the job of a forecast meteorologist has two parts: 1) analysis and forecasting; 2) making the forecast--working on the actual deliverables, whether they be discussions, digital forecasts made on an interface, severe weather products, or whatever. It is important to note that #2 (making the forecast) is not actually forecasting! It may seem like it is, since mets often jump back and forth between #1 and #2. Yet in an ideal world, #2 is what you do after you have spent the time doing #1 properly, which involves surface analysis, analyzing obs, soundings, etc. Most companies stretch their employees thin, such that a very significant amount of time (most of it, actually) must go into #2, since those are the actual deliverables that make money--the more products you can sell, the better. Unfortunately, this makes it very tempting for mets to rush through #1 and not do it properly so they can meet deadlines. As a result, many fall back on "heuristics"--mental shortcuts and rules of thumb ("the NAM is generally unreliable beyond 48 hours," etc.)--as a quick way to get to the forecast answers needed for #2. Is this an excuse? Yes. Is it an acceptable excuse? No. The way around this, I've found, is to devote a significant amount of time during quieter weather days to brushing up on severe weather forecasting skills as well as efficient forecast methods. This is something I do even on my own time, and I try to share the results as much as possible with my coworkers. Putting in this time during slow weather days is vital, IMO, so that your skills stay sharp and you are prepared to work quickly and efficiently, but in a way that is meteorologically sound, when the s**t hits the fan and you are pressed for time to do forecasting and analysis. Anyway, take it for what it's worth. This is just my opinion as to why a lot of mets are the way the OP described. I still think it was a fantastic post, as he brings up a very real problem; these are just some of the challenges involved in solving it. They are not insurmountable, though, if one is willing to put in the extra work. :)


Totally agree with this general assessment. Human psychology plays a huge role in this, and bias in weather forecasting generally has undesirable effects. I have seen it go multiple ways, from complete irrational belief in one model--to the point that people will "force" the model to fit their desired solution even if it suggests otherwise--to professional meteorologists "giving way" to the ECMWF because of its perceived status as the "king" of all models. While a forecaster may statistically verify better by leaning on the ECMWF, that will not necessarily produce a better forecast in a given situation. These are just a couple of examples, of course; the misuse of numerical guidance spans the whole spectrum. Quite honestly, no forecaster is completely innocent in this regard, and I often need to remind myself to truly analyze the situation before caving to a particular piece of guidance or solution.


The best part of weather, in my opinion. Trying to forecast how all these complex interactions will develop is one thing, but watching and learning is the best. Personally, I find weather and atmospheric processes quite humbling when you watch them in motion.


Agreed. All guidance has its uses--and I hope folks will reconsider the way they make use of numerical guidance, whether they are forecasting or simply tracking the weather. Hopefully everyone will take the extra time to learn how numerical models work, so they can use them more effectively and, perhaps, be less quick to bash the guidance without knowing how it works.


That high bias wasn't just in the DC area, either. Basically, the non-hydrostatic models were way too high anywhere south of the NJ/NY border.


I don't pay that much attention to the PacNW, so I'll ask you: were the models predicting a major snow event in Seattle that ended up being mostly rain?


I'm still waiting for the 20" of snow the NAM gave this area.


I'm just saying the NAM wasn't very good for this general area, while the GFS was. It was because the GFS never wavered that I kept my snow forecast lower for this area, and it's a good thing I did.

This is somewhat misleading, since the GFS was too weak and too far east with the coastal track....so it got some of the answer right (on the western periphery) for the wrong reason. However, it was very stubborn in hinting that the Mid-Atlantic wouldn't get in on the action much....which was useful in and of itself (as you point out, it suggested keeping forecast totals down in some places).


I think the short-range models have a tendency to smooth out their qpf fields, which is why they were too bullish on the western fringe. They were right on the track and on the qpf max, but if experience has taught us anything, it's that you rarely get such a large area of high qpf from an intense storm bombing just offshore; these storms are more prone to banding, with areas of higher qpf sandwiched between areas of lower-than-forecast qpf. They also have sharper-than-modeled cutoffs. I noticed that Upton applied this thinking too: they used the NAM's track and qpf maxes in the banding, while using the GFS as the model of choice for the fringe areas. As you said, it was right for the wrong reasons. I said this before the event started and I'll say it again: a wise move by Upton in applying model physics. If you want a storm that dumps big snow over a much larger area, you need a weaker system overrunning a large dome of Arctic air.
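
(A toy Python demonstration of the smoothing point--the banded field is invented: block-averaging a sharp band onto a coarser grid lowers the peak and pushes qpf into the fringes, which is exactly the too-bullish-fringe behavior described above.)

```python
# Invented 1-D qpf cross-section (inches) with a sharp mesoscale band.
fine = [0.1, 0.2, 2.6, 2.8, 2.5, 0.3, 0.2, 0.1, 0.1]

def block_average(values, width):
    """Average consecutive blocks -- a crude stand-in for what a coarser or
    smoother model grid does to a banded precipitation field."""
    return [sum(values[i:i + width]) / width
            for i in range(0, len(values) - width + 1, width)]

coarse = block_average(fine, 3)
print([round(v, 2) for v in coarse])  # [0.97, 1.87, 0.13]: peak cut, fringes fattened
print(f"fine max {max(fine):.2f} in vs coarse max {max(coarse):.2f} in")
```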
