
Why are models so bad?



Another thread-the-needle phase event. It is worth noting that even as the models began to "converge" a little yesterday in the 0Z-12Z runs on a better solution, they were also continuing the weakening and slowing trend with the southern PV, enough to show us as forecasters that the window of opportunity for the phase to occur was shrinking if a good outcome was to be expected, since it was obvious there would be weak GOM cyclogenesis. The UKMET showed that: it had the PV way down into the GOM, yet the solution was out to sea. The NAM/GFS, to a degree, showed that a track north of the GOM would also hook too late given the orientation of the cold air advection relative to the Gulf Stream. This is the event-probability issue discussed in the forecaster bias thread, and it is a good example of how model "convergence," or a developing consensus, does not necessarily equate to a higher probability of an event.
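The convergence-vs-probability point can be illustrated with a toy simulation: if the models share most of an error source (same analysis, similar physics), they can agree tightly with each other while the consensus is still far from the truth. The numbers below are purely illustrative, not any real model's error statistics:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cases = 10_000

# error shared by all "models" (common analysis / similar physics)
# deliberately dominates each model's independent error
shared = rng.normal(0.0, 3.0, n_cases)
models = np.stack([shared + rng.normal(0.0, 1.0, n_cases) for _ in range(4)])

truth = 0.0
spread = models.std(axis=0).mean()                   # apparent "convergence"
error = np.abs(models.mean(axis=0) - truth).mean()   # actual consensus error

print(f"mean model-to-model spread {spread:.2f}, mean consensus error {error:.2f}")
```

The models agree far more with each other than with the truth, which is exactly the trap: small run-to-run and model-to-model spread is not the same thing as a high probability of verification.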

The models were not bad with this system. Once again, any good forecaster knew, based on the situation, that this was still a rather low-probability event, likely 30% or less. A lot of folks were cautioning on this and saw the potential features that could throw this system off course and OTS, and they were right to carry a low probability as a result. The models did fine with this system. The forecasters who promised east coast destruction? Perhaps not.

I think this is true. I am sorely disappointed by the diminishing potential of a HECS/MECS event, and I want to lash out at the models (the Euro), but they did predict this storm well in advance, and we all know the track and intensity 5-7 days out is probably going to change. I think people are giving the Euro too much credit, though. Is it the best model? Yes, but that does not mean it is right and the other models are wrong in every individual case, or that because it shows the same thing 5 runs in a row 6-4 days out it must come to pass. In the long run it statistically outperforms the other models, but they are all continually getting better.

Two weeks ago it had two consecutive runs showing a coastal bomb, and this week it had 5 or so showing another one. I hope this isn't going to be a continued pattern, because we weenies won't be able to take much more of this....... :arrowhead:

Then again, I hope it goes back to showing a coastal bomb tonight.... :whistle:



I've been saying for a long time that the Euro is vastly overrated and has been regularly outperformed by the American data over the past 4-5 years or so. The proof is in the pudding if one merely goes back and looks at each and every major winter event and compares the model data. Last year the Euro wasn't even close a lot of the time. Not that the GFS was either, but it was generally better, especially with deformation setups, banding, meso stuff, etc.


I tend to think the current pattern is part of why the models are being so crazy. I also think some of us really want to think this is it, this time it's coming and it's all gonna center and fall IMBY.

When it comes to monster storms in one way, shape, or form, we all want in. I have been asking how/why certain years seem off the charts and others are garbage. The models are what I would call the constant: they just do the same thing every day, working off a constantly changing atmosphere and trying to show the differences they (sometimes) see. I really think the pattern is the wrench in the machine.


I've been saying for a long time that the Euro is vastly overrated and has been regularly outperformed by the American data over the past 4-5 years or so. ...

You need to go back through this thread and find the graph that clearly shows the Euro regularly outperforming the GFS.


Yes, not all "similar" 500 mb synoptic patterns are created equal, and they certainly do not produce similar surface features. Grading the performance of the various models has got to be a work in progress. There are (obviously) many variables one could focus on to determine verifications and errors, but what most people want to know is the surface weather. Analyzing verifications at select and random locations at various points in the forecast cycle might actually be a worthwhile exercise. Modeling has come a long way over the years, but there is still much room for improvement and probably will be for many years to come. Models, for forecasters of any range, short, medium, or long, should be used as the guidance they are intended to be, and then the individual needs to hone the actual sensible weather forecast based not solely on that guidance but on many other variables as well, including one's experience.


Yes, not all "similar" 500 mb synoptics are created the same and certainly do not produce similar surface features. ...

Do you remember the winter of 2001-02? JB kept claiming he was "correct at 500 MB" for the actual pattern, yet the pattern he was forecasting was cold and snowy. Obviously, that was FAR from reality in 2001-02.


Agree. That is precisely my point. How the upper atmosphere manifests itself at the surface is the ultimate goal of weather prediction. It does seem easier to model the upper atmosphere, but actually pinpointing surface events is still somewhat unattainable. That, in my humble opinion, remains the desired goal for the medium and long range: to specifically and accurately foretell the sensible weather.


Questions: (if this should be posted somewhere else, please move it or let me know)

I worked for the Nielsen company, which does a lot of modeling (though obviously not weather), and I have experience with having to explain models as well.

1) The models have known biases. Are attempts ever made to correct for them, or is that too expensive and liable to introduce unknown biases?

2) If the 12Z GFS run today was discounted, were the meteorologists who have to make local forecasts ever told why, or what was thought to be wrong? That does not seem to have been done, or maybe it has and we are not aware of it.

3) If the 12Z GFS was discounted, wouldn't it make sense not to include any of its data in the later runs of any models, and perhaps to go back to earlier runs of the GFS, etc.? It seemed there were concerns that the SREFs, 18Z NAM, etc. could still be impacted by that run.

Thanks for any answers that could be provided.

P.S. Perhaps there are sites that explain this.


I believe they're all graded at 500 MB, which could be a bad idea, too. A model could look one way at 500 MB and spit something totally different out at the surface, we've seen that already this winter.

But it can't be that much off from the surface and still nail 500. A storm is not going to track wildly differently with the same 500 mb pattern.


  • 4 weeks later...

A question about the DGEX.

Someone, I think at Eastern, told me higher-resolution models were important in forecasting out West, where the topography is so varied, when I asked about a high-resolution mesoscale model initialized off the prediction of a lower-resolution global model.

No matter how well finer grid resolution can see mountains, if the initialization is off a coarser model, and it's at a future time so finer-resolution actual data can't be ingested, what is the point?

Would it make more sense to expand the field of the current NAM, to cover more of the Atlantic and Pacific and over the North Pole into Northern Asia/Europe, and maybe run it out a little longer, and make up for the extra computer time by dropping the DGEX altogether? Maybe run the expanded NAM twice a day and the current NAM twice a day?

I must say, I have no idea how computationally expensive a particular model is, and what resources NCEP has. I know finer resolution/longer run time/bigger area is more computing power, but I'm clueless as to quantifying that.

Just wondered again seeing people commenting on an AccuWx met who apparently likes the DGEX.


A question about the DGEX. ...

The DGEX came to life as a way of helping to fill the forecast grids as part of the NWS NDFD movement (NDFD info here). It was one of the first downscaling approaches (i.e., translating information from coarse-grid models to finer grids) tried in an operational capacity (many other things have been developed and utilized since). In general, I believe it has been pretty successful/useful, especially out West....but as actual medium-range guidance it isn't particularly useful (because of the very small domain and reliance on boundary conditions).
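For anyone curious, downscaling in this sense essentially means interpolating the coarse field to the fine grid and then correcting for what the fine grid knows that the coarse grid doesn't (terrain, mainly). A minimal sketch along a hypothetical temperature transect, using a standard 6.5 K/km lapse-rate correction (all numbers invented, not any operational scheme):

```python
import numpy as np

# coarse-model 2 m temperature (degC) and terrain height (m) along a transect
coarse_x = np.array([0.0, 50.0, 100.0])   # km
coarse_t = np.array([10.0, 8.0, 5.0])
coarse_z = np.array([200.0, 800.0, 1500.0])

# the fine grid resolves terrain detail the coarse model smooths over
fine_x = np.linspace(0.0, 100.0, 11)
fine_z = np.interp(fine_x, coarse_x, coarse_z) + 300.0 * np.sin(fine_x / 8.0)

# 1) interpolate the coarse fields to the fine grid,
# 2) adjust temperature for the terrain difference with a 6.5 K/km lapse rate
t_interp = np.interp(fine_x, coarse_x, coarse_t)
z_interp = np.interp(fine_x, coarse_x, coarse_z)
fine_t = t_interp - 6.5e-3 * (fine_z - z_interp)
```

Real operational downscaling is of course far more involved, but the shape of the problem is the same: the fine grid adds terrain detail, not new atmospheric information.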

To answer the NAM specific question, there is a plan to increase the spatial resolution, but probably not run it longer (that's not what it's designed for.....though there are folks working on/testing a global version of the NMM, but I digress). Also, the NAM domain is already fairly large for a regional/mesoscale model (http://www.emc.ncep.noaa.gov/mmb/namgrids/g110.12kmexp.jpg).

If you have other specific questions, feel free to ask (here or PM). I can't believe I posted in this crappy thread yet again...I'm so ashamed :arrowhead:


I believe they're all graded at 500 MB, which could be a bad idea, too. ...

In terms of 500 hPa AC, this is one of the WMO standards for intercomparison of NWP skill between international centers. It isn't the only metric used, though it's one of the things we typically look at. B_I correctly points out elsewhere that good 500 hPa scores in a time-mean sense don't necessarily mean anything in terms of forecasting individual (particularly high-impact) events.

I shouldn't even need to address this silliness regarding surface reflection and 500 hPa evolution....QPF is one thing, especially in a multi-day/medium-range forecast...NWP models are going to produce fields that they evolve in a dynamically consistent manner (there can be issues related to sub-grid-scale processes and parameterizations)....but this notion that "the surface looks off" is rubbish. 500 hPa is NOT the only level to look at (the atmosphere is in fact a three-dimensional fluid), nor is it the "driver" for everything that happens.
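For reference, the 500 hPa AC mentioned above is just a centered pattern correlation between forecast and analysis anomalies relative to climatology. A minimal sketch on toy grids (not real model output):

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Centered anomaly correlation (AC) over a lat/lon grid.

    All inputs are 2-D arrays of, e.g., 500 hPa geopotential height.
    """
    f = forecast - climatology   # forecast anomaly
    a = analysis - climatology   # verifying-analysis anomaly
    # remove the domain-mean anomaly (the "centered" part)
    f = f - f.mean()
    a = a - a.mean()
    return float(np.sum(f * a) / np.sqrt(np.sum(f**2) * np.sum(a**2)))

# toy example: a perfect forecast scores 1.0, a sign-flipped one scores -1.0
clim = np.full((10, 10), 5500.0)
truth = clim + np.random.default_rng(0).normal(0.0, 60.0, (10, 10))
print(anomaly_correlation(truth, truth, clim))
```

Operational verification applies latitude weighting and averages over many cases, which is exactly why a good time-mean AC can coexist with a badly busted individual storm forecast.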


  • 3 weeks later...

One thing we need to remember as well is that forecasters, like models, have inherent biases that cloud their judgment. I spoke with a coworker about this issue a few weeks ago and stated that atmospheric modeling has gotten better while we, as a collective whole, have gotten worse. I say we've gotten worse because of informational overload and too much emphasis placed on QPF forecasts. Instead of a few models to look at every 12 hours, we now have a dozen or so models, some run 4x daily, and a collection of ensembles each based off a given model. Sometimes we get too involved in specifics and forget to look at the larger theme, especially beyond 24 hours. This is all not meant to suggest that our actual forecasts are worse on a day-to-day basis, but we're more reliant on modeled surface data than ever before and this can lead to a warped perception of reality and intellectual atrophy. Forecasters need to adjust and become better disciplined in their approach instead of becoming disillusioned because a particular piece of guidance "led them astray."


One thing we need to remember as well is that forecasters, like models, have inherent biases that cloud their judgment. ...

The solution: Do what I do and just look at a few select models. I only use the GFS, NAM, SREF and ECMWF. Ensembles are good to use for determining uncertainty in the short range and the most favored forecast in the medium range, but otherwise I just stick to the operationals. Spreading time and effort at looking at all of the different models is just a waste. A lot of people (weenies) on here do it just so they can look at every solution and then wishcast off of a model that shows their favored solution. The truth is it's a lot quicker and easier to make an accurate forecast based off of a few models that you know really well and can adjust for.


The solution: Do what I do and just look at a few select models. I only use the GFS, NAM, SREF and ECMWF. ...

You never even glance at the GGEM or UKMET?


Lol, pretty much same here. If those two models ever start showing me that they have a clue, maybe I'll pay attention.

Well then you should have been paying attention for a while now (particularly the UKMet.....perhaps not for specific types of events [EC cyclogenesis?], but it's a pretty darn good modeling/analysis system now overall).


Well then you should have been paying attention for a while now (particularly the UKMet). ...

This... I don't avoid the UKMet because it's bad (it's actually good); I just don't have the knowledge of its intricacies, and I find that what I do is a good, efficient way to get things done.


The Canadian develops a lot of spurious tropical cyclones, but it seems to start predicting actual cyclone development earlier than the other globals. It can be the first to give a heads-up to look someplace, IMHO, as an amateur, and if the Canadian doesn't show development, then development is usually unlikely.

I know, this started as a Winter season thread.


The solution: Do what I do and just look at a few select models. I only use the GFS, NAM, SREF and ECMWF. ...

I don't think that's necessarily the best approach, especially since you already cited four models (and I will assume MOS) and ensembles, which in itself is a good percentage of the available guidance. I think taking a casual glance at other models (especially the GGEM) can give an idea of uncertainty just as well as, if not better than, the ensemble members of your "model of choice." That said, poring over each run and getting into the fine details of each is where forecasters can easily find themselves in a less-than-optimal situation, from a time-management standpoint at the very least.
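The "glance at other models for uncertainty" idea amounts to treating the spread of the available solutions as a confidence signal. A toy sketch, with made-up snowfall numbers (the model values here are hypothetical, not any actual run):

```python
import statistics

# hypothetical 24 h snowfall forecasts (inches) for one site
forecasts = {"GFS": 8.0, "NAM": 12.0, "ECMWF": 6.0, "GGEM": 10.0}

consensus = statistics.mean(forecasts.values())
spread = statistics.pstdev(forecasts.values())  # big spread = low confidence

print(f'consensus {consensus:.1f}", spread {spread:.1f}"')  # consensus 9.0", spread 2.2"
```

A two-second glance at four deterministic runs gives you roughly this information; the ensembles refine it, but they aren't the only way to get it.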

With regard to the model you know really well, I'd argue it's hard to "know" any particular model these days given how frequently upgrades are made. BUT, model resolution and convective parameterization schemes can typically be counted on to produce a certain result (themes, not specifics) in a particular forecast environment, so it's usually a fairly easy decision to go with model A over model B when taking that into account. CAD is a good example. It gets tricky when you're talking about a shortwave on day five and where it will be.


One thing we need to remember as well is that forecasters, like models, have inherent biases that cloud their judgment. ...

Perfectly said. Models are guidance. They are pretty darned impressive in what they can do, but they aren't perfect by any stretch. Meteorologists who think they are, or should be, are limiting themselves significantly. We need to get back to a day when we actually analyze patterns and break them down in a synoptic/dynamic manner; the atmosphere does not follow the rules of any human-developed numerical model.

