weatherbob1234 (Posted February 23, 2013):

Which of the model runs (0z, 6z, 12z, 18z) are the best ones to compare? thanks, bobby
kylemacr (Posted March 1, 2013):

The 0z and 12z model runs have more data in them, for instance, the balloon launches. The 6z and 18z model runs can sometimes be thrown off by the lack of data, especially in low-predictability regimes. In reality, though, you should be really careful about comparing any runs with each other. The trend of the model, which we often call "dprog/dt", has little forecast value. There was an interesting paper on this some years ago that everyone seems to ignore: http://www.esrl.noaa.gov/psd/people/tom.hamill/dprogdt.pdf
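To make "dprog/dt" concrete: it is the run-to-run change in a forecast valid at a fixed time, which people are tempted to extrapolate forward. A minimal sketch of that extrapolation logic (the variable, the function name, and all the numbers are hypothetical illustration, not anything from the paper):

```python
def dprogdt_extrapolation(forecasts):
    """Naively extrapolate the run-to-run trend of a forecast variable.

    forecasts: values from successive model cycles (oldest first), all
    valid at the SAME future time, e.g. a storm's central pressure (hPa).
    Returns the next-run value implied by the latest cycle-to-cycle trend.
    """
    trend = forecasts[-1] - forecasts[-2]   # change from the previous cycle
    return forecasts[-1] + trend            # extrapolate one cycle further

# Hypothetical 00z, 06z, 12z, 18z runs of central pressure (hPa):
runs = [988.0, 985.0, 982.0, 979.0]
print(dprogdt_extrapolation(runs))  # 976.0 -- the "trend says deeper" logic
```

The point of the paper is that this extrapolation verifies no better than simply taking the latest run by itself.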
weatherbob1234 (Posted March 1, 2013, thread author):

Well, how do you know which model to use? Or do you have to compare all the runs to see if there is a trend? And in the wintertime, like now, which is the best model to use? thanks, bobby
ohleary (Posted March 1, 2013), replying to kylemacr:

I have to disagree; I look at trends, especially inside 60 hours. Also, it isn't true anymore that 00z and 12z have more data available.
weatherwiz (Posted March 2, 2013), replying to weatherbob1234:

There is no one model to use. It's all about blending models together. However, if you can distinguish which model or models are handling the pattern best, you can then gauge which ones to use with higher confidence for that upcoming period. With that said, the Euro (ECMWF) usually receives the highest verification scores, so many will tend to rely on it more heavily; however, there are times when it can and will perform poorly.
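One simple way to picture "blending" is a weighted average of model forecasts, with the weights reflecting your confidence in each model for the current pattern. A toy sketch (the model names are real, but the temperatures and weights are made up for illustration):

```python
def blend(forecasts, weights):
    """Confidence-weighted blend of model forecasts for one variable.

    forecasts: dict of model name -> forecast value
    weights:   dict of model name -> relative confidence (any positive scale)
    """
    total = sum(weights.values())
    return sum(forecasts[m] * weights[m] / total for m in forecasts)

# Hypothetical 2 m temperature forecasts (deg F); the weights lean on the
# Euro, per its typically higher verification scores.
fcsts = {"ECMWF": 30.0, "GFS": 34.0, "NAM": 28.0}
wts = {"ECMWF": 0.5, "GFS": 0.3, "NAM": 0.2}
print(round(blend(fcsts, wts), 1))  # 30.8
```

In practice the weights would come from recent verification against observations, not gut feel, but the arithmetic is the same.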
kylemacr (Posted March 3, 2013), replying to ohleary:

I'm pretty sure that the 00z and 12z runs have to have more data available, since the balloons aren't launched every 6 hours and satellite data just isn't the same, unfortunately...
dtk (Posted March 3, 2013), replying to kylemacr:

Based on what? What about surface METAR data (especially surface pressure), aircraft in-situ measurements, wind profilers, satellite-derived atmospheric motion vectors, GPS radio occultation, radar (VAD winds), and ships/buoys, in addition to the millions of satellite-based IR and MW radiances? Raobs make up a very small (in number) portion of the observing system. Don't get me wrong, they are critically important, but we have millions of observations at 06z/18z as well.
kylemacr (Posted March 4, 2013), replying to dtk:

My point is that the radiosondes provide very important observations, and these special, relatively coherent vertical profiles of the atmosphere can be critical during certain, especially unpredictable, patterns. One of the geographic areas where the sondes matter the most is the West Coast of the U.S. The Pacific is devoid of sondes, and that can cause issues when features propagate east from the Pacific and are suddenly sampled by the sonde network. The usefulness of vertical observations is the reason that NOAA often launches dropsondes from its aircraft when high-profile storms are on their way. That said, I agree with your main point. Most of the data are the same between model runs, and under many patterns there won't be many differences with or without the sondes. However, there are many times when the sondes make a huge difference, and it's often difficult to know ahead of time when those times are.
icebreaker5221 (Posted March 4, 2013), replying to dtk and kylemacr:

Daryl (dtk) is right that radiosondes make up only a small fraction of the total observations, but it's important to remember that all these data sources have advantages and disadvantages. AMVs and radiances are available worldwide with fairly high spatial density, but it's hard to pinpoint the exact pressure level of a satellite-based wind or temperature measurement looking down through a column of atmosphere. I recall seeing a talk by the people at CIMSS showing that a considerable fraction of AMVs are off by 50+ mb, or their wind vectors are off by 30+ degrees in direction, which is much larger than the errors we have with radiosondes. METAR, ship, buoy, etc. are all great data sources, but they only provide information at the surface, which comprises a very minor fraction of the total atmosphere. Similarly, aircraft measurements tend to be confined to preferred flight routes at relatively fixed pressure levels, not a deep layer of the atmosphere. Probably the biggest disadvantage of radiosondes is that their obs are so few and far between, both geographically and temporally.

However, Kyle, the differences in skill scores between the 06/18Z runs and the 00/12Z runs have been shown to be statistically insignificant. You can probably find the data here, minus the significance testing: http://www.emc.ncep.noaa.gov/gmb/STATS_vsdb/. This is likely due both to the fact that radiosondes make up a small fraction of the total data and to the fact that the off-hour model "first guess" retains a lot of the information gained from radiosondes during the 00/12Z cycle, thanks to advances in data assimilation and the models themselves.
kylemacr (Posted March 4, 2013), replying to icebreaker5221:

Interesting page... I'm surprised that there are quite a few times when the 18z runs, for instance, have the highest ACC of all the runs. There must be certain synoptic patterns, though, that are inherently less predictable than others and therefore significantly more dependent on the radiosondes. I'm just not familiar enough with the literature to know of a study that shows this, but perhaps you do, Will?
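For readers new to the acronym: ACC is the anomaly correlation coefficient, the correlation between forecast and analyzed anomalies from climatology, the standard headline score for medium-range models. A minimal sketch of the usual uncentered form, with made-up numbers:

```python
import math

def acc(forecast, analysis, climatology):
    """Anomaly correlation coefficient between a forecast and its verifying analysis.

    All three arguments are values of the same field (e.g. 500 mb height)
    at the same set of grid points.
    """
    f_anom = [f - c for f, c in zip(forecast, climatology)]
    a_anom = [a - c for a, c in zip(analysis, climatology)]
    num = sum(f * a for f, a in zip(f_anom, a_anom))
    den = math.sqrt(sum(f * f for f in f_anom) * sum(a * a for a in a_anom))
    return num / den

# Hypothetical 500 mb heights (dam) at a handful of grid points
clim = [552.0, 556.0, 560.0, 564.0]
fcst = [548.0, 555.0, 563.0, 566.0]
anal = [549.0, 554.0, 562.0, 567.0]
print(round(acc(fcst, anal, clim), 3))  # 0.931
```

A perfect forecast scores 1.0; by convention, scores above roughly 0.6 are considered to retain useful synoptic skill.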
dtk (Posted March 4, 2013), replying to icebreaker5221:

All valid points, and I'm not at all trying to diminish the importance of radiosondes (they are one of the most critical components of our current observing system). The only other components with similar impact are the space-based microwave and hyperspectral IR sounders (sheer volume of data, high temporal and spatial frequency... but extremely difficult/complex to assimilate).

Your other point about system "memory" is a good one, since the initialization process is incremental, i.e. we update a short-term (6 hr) model forecast with an analysis increment based on assumed information about said forecast and the observations. As an example, if we were to assimilate no observations at 06z, the 114-hour forecast from that analysis would be identical to the 120-hour forecast from the previous 00z cycle (given the incremental-update nature of the assimilation, the 06z analysis would simply be the 6-hour forecast from the previous cycle). There are some low-level, practical details, such as the additional late catch-up cycle, that keep this from being exactly true, but you get the point.

In terms of the 6z/18z cycles, a colleague of mine has a PowerPoint slide looking at this over the past decade: http://www.emc.ncep.noaa.gov/gmb/wx24fy/doc/GFS4cycle_fyang.pdf One can clearly see that the gap between the 6z/18z cycles and the other two has largely been removed (better/more observations, improved models, more sophisticated assimilation algorithms), but not necessarily eliminated entirely if you are comparing forecasts of the same length. However, it is pretty easy to demonstrate that there is still value in running these cycles, given that we lose skill with increased lead time.
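The incremental-cycling point can be sketched with a toy scalar "model" (the dynamics and numbers are invented purely to show the mechanism): if a cycle assimilates nothing, its analysis is just the previous cycle's 6-hour forecast, so every forecast launched from it duplicates an older forecast at the same valid time.

```python
def step(x):
    """Toy 6-hour model step (a stand-in for real dynamics)."""
    return 0.9 * x + 1.0

def cycle(analysis, increment):
    """One assimilation cycle: 6 h first-guess forecast plus an analysis increment."""
    background = step(analysis)       # the "first guess"
    return background + increment     # increment is 0 if no obs are assimilated

def forecast(analysis, n_steps):
    """Integrate the toy model forward n_steps six-hour steps."""
    x = analysis
    for _ in range(n_steps):
        x = step(x)
    return x

x_00z = 10.0                  # 00z analysis (arbitrary toy value)
x_06z = cycle(x_00z, 0.0)     # 06z cycle with NO observations assimilated

# 114 h from the 06z analysis (19 steps) equals 120 h from 00z (20 steps)
print(forecast(x_06z, 19) == forecast(x_00z, 20))  # True
```

With a nonzero increment at 06z the two forecasts diverge, which is exactly the value the off-hour cycles add: the same valid time reached with a shorter lead.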
kylemacr (Posted March 4, 2013):

That's a great slideshow, dtk, thanks. Very interesting...
ohleary (Posted March 4, 2013), replying to kylemacr:

Studies over a decade ago showed that Pacific dropsondes improved the models' performance, which is why the Winter Storm Reconnaissance (WSR) program was started. But a recent study of the 2011 WSR season concluded that the differences between forecasts with and without the drops were statistically insignificant. This is thought to be partly due to the much higher number of observations over the previously "data-sparse" Pacific, along with much-improved data assimilation systems and the improved models themselves. Sonde data is important, but not like it used to be. Plus, the 12Z sonde data is "in" the 18Z forecast, even though it's not directly assimilated into that run. I'm sure it'll be sooner rather than later that someone asks whether the WSR program provides the bang for the buck... it's a pretty expensive program.
MichaelScott (Posted March 14, 2013), replying to ohleary:

I have to ask what makes you disagree with the paper he presented. Honestly just wondering, because following trends is something a lot of mets on this board seem to do, yet the paper pretty explicitly states that it is an unreliable methodology at best, and it presents some rather compelling evidence to support that claim. The only slight problem with the paper is that it doesn't test dprog/dt with modern models, though the authors explain why in a very reasonable manner. Perhaps dprog/dt has gotten better as a rule of thumb as the models have improved, but there's no real way to test that, since these newer models are updated so regularly.
This topic is now archived and is closed to further replies.