Why are models so bad?



This is one thing I don't fully understand about the human drive toward perfection... variability is what makes everything so much better. The day we have perfect models will be the day life suddenly becomes more boring. This applies, as you said, to almost everything in science. We don't need to be perfect or have perfect solutions to everything.

The day we have near-perfect models, though (true perfection is impossible), is the day we start having fun with storms 60-90 days out the way we do now with storms 10-14 days out.

Higher resolution, better satellites (about the only way to gather super-high-resolution analysis data), and 4DVAR would all improve the models. Solving the full Navier-Stokes equations would help too. :)
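
For what it's worth, here's roughly what 4DVAR means, as a toy sketch in Python. The three-variable Lorenz-63 system stands in for the atmosphere, and every number here is invented for illustration; real operational 4D-Var adds proper error covariances, an adjoint model, and far more. The core idea: instead of fitting the initial state to observations at a single instant, pick the initial state whose forecast best fits all the observations across a time window.

```python
# Toy "4D-Var": choose the initial condition whose forecast best fits
# observations spread over a window, plus a pull toward the first guess.
# Lorenz-63 is a 3-variable chaotic toy model, not a real atmosphere.
import numpy as np
from scipy.optimize import minimize

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    dx = sigma * (x[1] - x[0])
    dy = x[0] * (rho - x[2]) - x[1]
    dz = x[0] * x[1] - beta * x[2]
    return x + dt * np.array([dx, dy, dz])

def run(x0, nsteps):
    """Integrate forward, returning the whole trajectory."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(nsteps):
        traj.append(lorenz63_step(traj[-1]))
    return np.array(traj)

rng = np.random.default_rng(0)
truth = run([1.0, 1.0, 1.0], 200)                 # the "real" atmosphere
obs_times = np.arange(0, 201, 25)                 # sparse obs in the window
obs = truth[obs_times] + rng.normal(0.0, 0.5, (len(obs_times), 3))
background = np.array([2.0, 0.0, 0.0])            # deliberately poor guess

def cost(x0):
    """Misfit to the first guess plus misfit to every obs in the window."""
    traj = run(x0, 200)
    return 0.5 * np.sum((x0 - background) ** 2) \
         + 0.5 * np.sum((traj[obs_times] - obs) ** 2)

analysis = minimize(cost, background, method="Nelder-Mead").x
print("first-guess error:", np.linalg.norm(background - truth[0]))
print("analysis error:   ", np.linalg.norm(analysis - truth[0]))
```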



Just for the record, I'm not saying models should be near perfect. I'm not saying models should be great 4+ days out. What I'm saying is that models SHOULD be very, very good inside 72 hours, and that they should not waver from run to run by more than 50-100 miles inside that window.




They are already. As for wavering inside 72 hours, that is going to happen in cases of positive-feedback cyclogenesis, where tiny errors grow exponentially with time. I think you need to read the books recommended by some of the posters in this thread. There is a fair amount of variability and chaos in this world, and there isn't anything you or I can do about it. Once you realize that, you will see that your arguments in this thread hold little weight. :D
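
To put a number on "tiny errors grow exponentially," here's a minimal sketch (the logistic map is a textbook chaotic toy, not a weather model, and the starting values are arbitrary): two runs that begin one part in a billion apart give completely different answers within about 30 steps.

```python
# Sensitive dependence on initial conditions, in miniature: two runs of
# a chaotic toy system starting almost identically, diverging completely.
a, b = 0.400000000, 0.400000001   # initial states differing by 1e-9

for step in range(1, 51):
    a = 4.0 * a * (1.0 - a)       # logistic map in its chaotic regime
    b = 4.0 * b * (1.0 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: run1={a:.6f}  run2={b:.6f}  diff={abs(a - b):.2e}")
```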



They have been awful inside 96 hours this month. I think inaccuracies like we've just experienced are inexcusable in the year 2010.


A 100 mile shift in 24 hours is fine. A 700 mile shift in 12 hours is disturbing.

This is where the recognition of the fragility of the pattern comes into play. Just because the models are shifting wildly does not mean the models are getting worse. The statistics are the statistics. The models have not gotten any worse. What may be getting worse is people's ability to analyze and understand them, and to use them as they are intended. Guidance, guidance, guidance.

I'm not here to say that this weekend's storm didn't feature a wild or dramatic swing in the forecast models. It was definitely a bit wilder than we have seen recently. That being said, check out the pattern. There's an extremely fast Pacific flow, an unusually large and anomalous polar vortex, a southern-stream system, 50 shortwaves over the Great Lakes, a baroclinic zone off the East Coast... the list goes on and on. This thread doesn't really have much basis or argument.



The AO, the Pineapple Express... both are often underestimated by the models (for proof, see much of the 2009-2010 winter).



The way I look at it, when there are such dramatic shifts, it says much more about the pattern than about the models. The margin of error is much larger than usual because the pattern is so anomalous. Could the models use improvement? Sure -- but that will come with time and with improvement in humanity's knowledge of how the atmosphere and ocean handle the transfer of energy between them.

But there will always be some uncertainty -- why? Because nature doesn't always adhere to cause and effect.


Just want to say, I'm finding this a really informative thread. Fascinating perspectives. What a great range of expertise.

I agree. Even though the original premise of the thread may have been created out of frustration, the fact is that it's been an extremely educational and enlightening experience to see the "insiders" weigh in with the inner workings of computer simulations that we all take for granted.


The day we have near-perfect models, though (true perfection is impossible), is the day we start having fun with storms 60-90 days out the way we do now with storms 10-14 days out.

Higher resolution, better satellites (about the only way to gather super-high-resolution analysis data), and 4DVAR would all improve the models. Solving the full Navier-Stokes equations would help too. :)

Fun -- perhaps. But within the limits of chaos and probabilistic variability, the accuracy won't be very high that far out in time. But that just makes it more fun :) Like a detective story where you don't know how it will end, lol.

I'm still convinced that, no matter how much advancement we make, there will always be compromises to be made... for example, to go that far out in time, you will need to lower the resolution. If you have ultra-high resolution in the future, you will have to deal with shorter lead times -- otherwise, your system will be engulfed and bogged down with a ton of "noise" (a lower signal-to-noise ratio). Consider that the gift of spatial-temporal coupling... what you get from one, the other takes away, lol.
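
That squeeze is easy to put rough numbers on. A back-of-envelope sketch (assuming cost scales with the horizontal point count times the timestep count, vertical levels held fixed -- a crude rule of thumb, nothing more): each halving of the grid spacing costs roughly 8x the compute, which has to come out of forecast range, ensemble size, or both.

```python
# Why "just run it at higher resolution" is expensive: halving the grid
# spacing doubles the points in x and y and, via the CFL stability
# limit, roughly doubles the number of timesteps too.
base_dx_km = 25.0
for halvings in range(4):
    dx = base_dx_km / 2 ** halvings
    cost = 2 ** (3 * halvings)    # 2x nx, 2x ny, 2x steps per halving
    print(f"dx = {dx:6.2f} km  ->  relative cost ~ {cost:4d}x")
```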

But I agree that there will be continual, steady improvement, and that -- especially with the help of ensembles, which let us simulate some of the inherent variability of the atmosphere -- we can improve the probability of making decent long-range forecasts, within certain built-in limits.


Can I make an argument that the models did pretty well in general on this storm? Other than a few fantasy-range hits, the Euro had a hiccup run at 12z a few days ago. The CMC showed a decent hit for one run, the NAM never gave in, and the GFS caught on. What gave us hope for a big hit was the 12z Euro run, to be honest. A lot of people wanted this storm so badly that it felt like the models did horribly when they all came into agreement yesterday on an OTS (out-to-sea) storm. Just my $0.02.


The red flag for me was the wide range of solutions in the guidance from the 12z runs to the next 00z runs. In that setup, it was clear that they were struggling big time. If you look at all the things going against the pattern for a big storm, as mentioned by others in this thread, why would you expect anything else? Pattern recognition, for me, is the most important thing when I forecast. It's not always about what the model says, but instead: does the pattern fit what the model spits out? Like some have said in this thread, it is easy to go all gung-ho on a storm if there is enough model support. But what about the next runs to follow? Do you ever consider that it may change, like it has before?

The OP obviously got burned on his forecast, but this was not the place to say what he did. It's like DT has said before: if you are a met and can't live with being wrong, then you are in the wrong business. Everyone busts, whether it's because a model was wrong or your analysis of the guidance was wrong. It doesn't warrant a thread to b*tch about why the models are no good. Just go back to the old barotropic model or the LFM and compare the guidance we have today to them. I remember when I first started a job in meteorology and my boss told me how lucky we were to have the great models of today compared to the ones he had in his day.

To me, if you know a model's bias, then you adjust for it. The same goes for a pattern. If the pattern, like the one we have now, says "watch out, the guidance will struggle," then you have to expect what went on with this storm. To act all surprised otherwise is just foolish.


The ensembles did show many solutions, and the models went back and forth; this told me and the NWS that the confidence level was very low. A little change meant a lot. Now that we are closer to the event, all the models put it east of I-95. This is very much like last February's pattern; back then the models also had trouble. So some of the blame goes to the pattern, not the models.
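
That "the spread is the signal" idea is easy to make concrete. A minimal sketch with invented track positions (none of these numbers come from the actual runs): when successive solutions are all over the place, the spread itself is the low-confidence forecast.

```python
# Run-to-run spread as a confidence gauge, with made-up track positions
# (miles east of I-95) across four run cycles for three models.
import statistics

runs = {
    "GFS":  [40, 180, 90, 250],
    "Euro": [60, 30, 200, 240],
    "CMC":  [150, 20, 310, 260],
}
latest = [positions[-1] for positions in runs.values()]
spread = statistics.pstdev([p for ps in runs.values() for p in ps])

print(f"latest consensus: {statistics.mean(latest):.0f} miles east")
print(f"overall spread:   {spread:.0f} miles -> "
      f"{'LOW' if spread > 75 else 'decent'} confidence")
```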


I think the answer is that these models are essentially stupid robots with too few sensors. Many respected posters have said here that a human forecaster still makes a better forecast by using myriad inputs, experience, art, etc. That says to me that single-model outputs are poor at point forecasting compared to a solution that considers more sources.
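
As a sketch of what "considering more sources" can look like (all numbers invented): blend the models, trusting each one in inverse proportion to its recent error. Even a crude blend like this tends to beat any single model over time, and a good human forecaster effectively runs a much smarter version of it in their head.

```python
# A crude skill-weighted consensus: combine several model solutions,
# weighting each by the inverse of its recent mean absolute error.
forecasts = {"GFS": 8.0, "Euro": 12.0, "NAM": 3.0}   # snowfall, inches
recent_mae = {"GFS": 2.5, "Euro": 1.5, "NAM": 4.0}   # recent skill proxy

weights = {m: 1.0 / recent_mae[m] for m in forecasts}
total = sum(weights.values())
consensus = sum(forecasts[m] * weights[m] for m in forecasts) / total

for m in forecasts:
    print(f"{m:>4}: forecast {forecasts[m]:4.1f} in, weight {weights[m] / total:.2f}")
print(f"consensus: {consensus:.1f} in")
```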


Can I make an argument that the models did pretty well in general on this storm? Other than a few fantasy-range hits, the Euro had a hiccup run at 12z a few days ago. The CMC showed a decent hit for one run, the NAM never gave in, and the GFS caught on. What gave us hope for a big hit was the 12z Euro run, to be honest. A lot of people wanted this storm so badly that it felt like the models did horribly when they all came into agreement yesterday on an OTS (out-to-sea) storm. Just my $0.02.

I agree, and I think the "big picture" message to take from what happened is that people tend to concentrate on single model runs too much -- and if you go just by that, yes, they weren't very good. But that's not the way it's supposed to work. Modeling doesn't exist in a vacuum and was never meant to. Each model run is a very small piece of the puzzle, not the whole puzzle itself -- much less the solution. You have to look at all the accumulated runs, plus the ensembles -- and then you realize what the models are trying to tell you. Like others have said, the pattern was and is very volatile and fragile. THIS WAS THE MESSAGE THE MODELS WERE TRYING TO GET ACROSS, AND IN THIS THEY WERE HIGHLY SUCCESSFUL. (I apologize for the caps; not yelling, just trying to emphasize.) The fact is the models are only guidance, not gospel, and someone with brains is supposed to interpret them as the sum total of many runs, not take one run verbatim. And on top of all this, the global signals just weren't right, so putting that into the mix should have made everyone realize how much of a long shot this really was -- regardless of what a particular model was saying. Computer models don't compute common sense, but that doesn't mean we shouldn't either ;)

The models' varying solutions were signaling caution right from the start, so I would say they really were successful. It's the people who interpreted them the wrong way who "failed." (I don't play the blame game, though -- so I wouldn't use that word. Let's just say it should have been a learning experience for them and for all of us -- so this really isn't a "failure" for anyone.)



Right on. There's a difference between meteorologists and "people who report what the models are indicating." Anyone with a few days' training can do the latter. And the models are good enough at this point that 80% of the time, four or five days out, they'll be right. But the "art" of meteorology lies in that last 20%.


The day we have near-perfect models, though (true perfection is impossible), is the day we start having fun with storms 60-90 days out the way we do now with storms 10-14 days out.

Well, if you believe Lorenz, our maximum limit of accurate predictability is about two weeks for baroclinic systems. And that's assuming the best possible model you can realistically construct.
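
The arithmetic behind that ceiling is simple to sketch. Assuming analysis errors double every ~2 days (a commonly quoted ballpark; all numbers here are illustrative) until they saturate at climatological spread, even halving the initial error buys only one extra doubling time -- which is why better models and better data push the horizon out slowly, not indefinitely.

```python
# Why chaos caps the forecast horizon: errors double every ~2 days
# until they reach the error level of a pure-climatology forecast.
# Illustrative numbers only.
import math

doubling_days = 2.0
initial_error = 1.0     # small analysis error, arbitrary units
saturation = 150.0      # error of a climatology "forecast"

horizon = doubling_days * math.log2(saturation / initial_error)
print(f"skill horizon ~ {horizon:.1f} days")

# Halving the initial error buys exactly one extra doubling time:
improved = doubling_days * math.log2(saturation / (initial_error / 2))
print(f"with half the initial error ~ {improved:.1f} days")
```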


The ensembles did show many solutions, and the models went back and forth; this told me and the NWS that the confidence level was very low. A little change meant a lot. Now that we are closer to the event, all the models put it east of I-95. This is very much like last February's pattern; back then the models also had trouble. So some of the blame goes to the pattern, not the models.

Last year the models handled that pattern much better overall. The Feb 5th storm was one of the best long-range forecasts of a major DC-area snowstorm that I can remember; it was on the radar while it was still snowing on Jan 31. The Feb 10th storm was not quite as well forecast, but it was still way better than the last two this year. Maybe later in the month there was a northward shift to some of the storms, but overall they were forecast really well compared to this year. I guess our memories are different because of differences in perception based on where we live.


Can I make an argument that the models did pretty well in general on this storm? Other than a few fantasy-range hits, the Euro had a hiccup run at 12z a few days ago. The CMC showed a decent hit for one run, the NAM never gave in, and the GFS caught on. What gave us hope for a big hit was the 12z Euro run, to be honest. A lot of people wanted this storm so badly that it felt like the models did horribly when they all came into agreement yesterday on an OTS (out-to-sea) storm. Just my $0.02.

I think so too.

Perhaps I'm starting to gain some experience when it comes to model watching, but I never expected this storm to bring any snow at all to the NYC area or to anyone near or northwest of that zone. Most of the model runs were offshore, especially the EC, despite a few blips. Most storms are going to have extreme forecast hiccups before the event. The key is to note the overall trend or consistency.


One good thing about better models, say at 24-48 hours: you don't have heavy snow warnings that end up as partly cloudy the next morning anymore. Now that's a letdown! It happened many times in the '60s and early '70s.

And then there was January 13, 2008.

My question is: what is it about an anomalous pattern, such as the current one, that gives models fits? It can't be the physics of it, since that stuff is hard science. Is it simply an opportunity to introduce different kinds of errors due to gaps in data sampling, etc., or is it something more than that?


It happened many times in the '80s... it's amazing how, with the LFM going away, the Eta replacing it, and the AVN probably getting more usage than the NGM, those busts dropped off markedly in the early '90s, as I posted at the beginning of this thread. The LFM basically had one great score, the Thanksgiving event in 1989, which it had well in advance. The busts now come more from warm-air advection errors, where snow is forecast and you get more sleet or freezing rain. But yeah, snow forecasts turning into partly cloudy are rare; of course you have the December 2000 incident, but they're pretty infrequent now.

If I'm not mistaken, the LFM's greatest hit was the February Blizzard of 1978.



Ed, I've always wondered about that. A terrible bust with the East Coast blizzard in Jan 1978 and its greatest victory in Feb 1978. Can't get any more disparate than that.


And then there was January 13, 2008.

My question is: what is it about an anomalous pattern, such as the current one, that gives models fits? It can't be the physics of it, since that stuff is hard science. Is it simply an opportunity to introduce different kinds of errors due to gaps in data sampling, etc., or is it something more than that?

That may be a small part of it. But the nature of chaos is probably the driver that creates these "fits." Just look at the verification scores for the models... note that they have "ups" and "downs," and I suspect that the "downs" come during more chaotic flow regimes.

Quick analogy: take a bottle (one that floats) and place it in a fairly slow, flat, steady stream and count to 20... note where it ends up at T+20. Do that 10 times, and the end points will fall within a certain circle. Do the same thing in a stream with rapids, eddies, and curves, and I suspect that circle would be quite a bit larger.

The point is, the atmosphere has varying states encompassing various degrees of chaos at any given time, and since the models remain the same, expecting their performance to remain the same when added chaos is introduced is an expectation sure to disappoint.
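
That bottle analogy translates almost line for line into a little Monte Carlo experiment (purely illustrative numbers): same drift, same count to 20, but crank up the "turbulence" and the circle of end points grows dramatically.

```python
# The bottle-in-the-stream analogy as Monte Carlo: identical drift,
# different turbulence, very different spread of end points.
import random

def float_bottle(turbulence, steps=20, trials=10):
    """Drift downstream with random sideways kicks; return end positions."""
    ends = []
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0.0, turbulence)   # eddy kicks
        ends.append(x)
    return ends

random.seed(1)
for name, turb in [("calm, flat stream", 0.1), ("rapids and eddies", 1.0)]:
    ends = float_bottle(turb)
    print(f"{name:>18}: end-point spread ~ {max(ends) - min(ends):.1f} units")
```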

Couple that with your noted point, and those errors are then magnified. Remember, the initialization of ANY model is a "best approximation" of the atmosphere's state. That is why scrutiny of the T+0 panels is conducted at every run by HPC at NCEP: it, in effect, tries to warn forecasters when and why there may be larger error potential at T+xx.


Feb 1978 was first predicted by the 72-hr prog, then the LFM, and it showed a strong indication of East Coast development on the Friday before the storm.

I remember NOAA Weather Radio the day before saying we could see one of the biggest snowstorms in NYC history... Boy, were they right!


Guest someguy

Dude... you are rapidly climbing up the list of DT's favorite non-met posters...

Man, you clearly have your sh!t in one sock.

You are one darn smart, insightful poster.

I agree, and I think the "big picture" message to take from what happened is that people tend to concentrate on single model runs too much -- and if you go just by that, yes, they weren't very good. But that's not the way it's supposed to work. Modeling doesn't exist in a vacuum and was never meant to. Each model run is a very small piece of the puzzle, not the whole puzzle itself -- much less the solution. You have to look at all the accumulated runs, plus the ensembles -- and then you realize what the models are trying to tell you. Like others have said, the pattern was and is very volatile and fragile. THIS WAS THE MESSAGE THE MODELS WERE TRYING TO GET ACROSS, AND IN THIS THEY WERE HIGHLY SUCCESSFUL. (I apologize for the caps; not yelling, just trying to emphasize.) The fact is the models are only guidance, not gospel, and someone with brains is supposed to interpret them as the sum total of many runs, not take one run verbatim. And on top of all this, the global signals just weren't right, so putting that into the mix should have made everyone realize how much of a long shot this really was -- regardless of what a particular model was saying. Computer models don't compute common sense, but that doesn't mean we shouldn't either ;)

The models' varying solutions were signaling caution right from the start, so I would say they really were successful. It's the people who interpreted them the wrong way who "failed." (I don't play the blame game, though -- so I wouldn't use that word. Let's just say it should have been a learning experience for them and for all of us -- so this really isn't a "failure" for anyone.)


Archived

This topic is now archived and is closed to further replies.
