
Model Resolution and Diffusion


phil882


While the general pattern is favorable for a Mid-Atlantic snowstorm (southern Greenland blocking / split flow), there are a lot of shortwaves in play here, and that is going to significantly increase the uncertainty. With this caveat in mind, we are going to see huge short-term changes in the modeled storm track and intensity over the next 2-3 days before the solutions converge.

However, I think the GFS has an increased probability of being wrong with the current storm evolution, mainly because it won't be able to properly handle the smaller-scale features that will be pivotal to how this storm evolves. This isn't only a matter of the GFS's coarser resolution compared to the ECMWF. The GFS also uses the leapfrog time-stepping scheme. This is important because the main weakness of that scheme is that unless damping (diffusion) is applied to suppress high-frequency signals (smaller-scale waves such as small shortwaves), the model will amplify features too quickly and blow up or crash. While all models apply some form of diffusion to handle poorly resolved waves, using this diffusion in tandem with the leapfrog method also sacrifices some overall accuracy.
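As a toy illustration of the leapfrog/damping trade-off just described (a minimal sketch with made-up parameter values, not anything taken from the GFS itself), here is a leapfrog integration of a simple oscillation equation with a Robert-Asselin filter:

```python
# Toy leapfrog integration of the oscillation equation du/dt = i*omega*u
# (the exact solution keeps amplitude 1 forever), with a Robert-Asselin
# time filter.  The leapfrog computational mode shows up as a spurious
# 2*dt oscillation; the filter keeps it in check, but it also erodes the
# amplitude of the real (physical) wave over many steps (this is the
# accuracy cost mentioned above).  All parameter values are made up.
omega = 1.0     # wave frequency
dt = 0.2        # time step
nu = 0.05       # Robert-Asselin filter coefficient
nsteps = 500

def rhs(u):
    return 1j * omega * u

u_prev = 1.0 + 0j                    # u at time 0
u_curr = u_prev + dt * rhs(u_prev)   # start-up step (forward Euler)

for _ in range(nsteps):
    u_next = u_prev + 2.0 * dt * rhs(u_curr)              # leapfrog step
    # Robert-Asselin filter: nudge the middle time level toward the mean
    # of its neighbours, damping the 2*dt computational mode.
    u_curr = u_curr + nu * (u_prev - 2.0 * u_curr + u_next)
    u_prev, u_curr = u_curr, u_next

print("amplitude after", nsteps, "steps:", abs(u_curr), "(exact: 1.0)")
```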

My hunch is based on this knowledge. The GFS will tend to handle the overall pattern and the larger waves well, but it's the small features (like the ones the ECMWF has been showing interacting with the southern-stream shortwave) that will make all the difference. If the GFS can't model these waves to the same degree of accuracy as the ECMWF, then it makes more sense to believe the model that tends to have the higher accuracy.

Another thing worth mentioning is that, in general, models are too slow with the forward propagation of waves, which is why we so often see solutions bust on the slow side, even in the short-term forecast. That is something that should be in the back of people's minds when trying to forecast shortwave evolution.

Of course, none of this means that we will see a huge snowstorm, or even a storm at all. It's just that, in the overall scheme of things, the ECMWF will tend to have better-resolved shortwave features than the GFS for reasons beyond its higher resolution, and the models as a whole tend to move waves too slowly (by a very small amount, but one that can be amplified over time).


Thank you. I had never heard that before and appreciate that kind of insight from a red tagger.

I do want to highlight that I'm only just learning about these things myself in an NWP class... it's fascinating stuff, and it often makes you wonder how models perform as well as they do considering all of the sources of error that can creep in over time (both spatially and temporally!). I'm absolutely sure that dtk would be able to tell you far more than I can (and perhaps more correctly as well! :) )

A note on the slow propagation of waves in modeling... this tends to be more significant as a wave becomes more poorly resolved. To properly resolve a feature you typically need at least six grid points per wavelength if you are using finite differencing. Both the GFS and the ECMWF are spectral models, using spherical-harmonic (Fourier-type) transforms to represent spatial features, so this error might not be as relevant for them. However, the NAM does use finite differencing, so this error would most likely crop up in its forecast over time.
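To put a rough number on how much a poorly resolved wave lags under centered finite differencing (a generic back-of-the-envelope calculation, not tied to the NAM or any particular model): for linear advection, a second-order centered scheme moves a wave sampled by n grid points per wavelength at a fraction sin(2*pi/n)/(2*pi/n) of the true speed.

```python
# Toy calculation: phase-speed ratio of second-order centered differencing
# for the linear advection equation u_t + c*u_x = 0.  A wave resolved by
# n grid points per wavelength has k*dx = 2*pi/n, and the scheme moves it
# at a fraction sin(k*dx)/(k*dx) of the true speed c.
import numpy as np

for n in (4, 6, 8, 16, 32):
    kdx = 2.0 * np.pi / n
    ratio = np.sin(kdx) / kdx
    print(f"{n:3d} points/wavelength -> numerical speed = {ratio:.3f} * c")
```

A 4-point wave moves at only about two-thirds of the true speed, while a well-resolved 32-point wave is within about half a percent, which is consistent with the "roughly six points per wavelength" rule of thumb.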


In this case I think that resolution/physics play a much greater role than the time-differencing scheme (all models have mechanisms to clamp down on noise, "diffusion", etc.). The leapfrog scheme does have a computational mode to deal with, but that's fairly easy to mitigate/minimize (even outside of brute-force damping/diffusion). The scheme that ECMWF uses in their model (some sort of semi-implicit?) probably has greater smoothing/diffusive effects than a leapfrog scheme (with Asselin filter).

Using spherical harmonics/waves/spectral representations can be just as problematic as (if not more so than) straight-up finite differencing. Not all of the computations can be done in wave space (for example, you need physically defined variables to do convection, radiation, etc.), so there are transforms that need to be done between wave and physical space within the model integration (these transforms are not exact, and have noise associated with them as well).
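Here is a minimal 1D Fourier sketch of the kind of transform/aliasing noise being described (real global models use spherical-harmonic transforms and careful truncation rules, so treat this purely as an illustration of the idea):

```python
# Toy 1D illustration of the grid<->wave transforms described above: a
# pseudo-spectral model keeps coefficients in wave space but must go to
# grid space to form products (the analogue of nonlinear/physics terms).
# The product of two resolvable waves can alias back onto a wrong, lower
# wavenumber unless the high modes are truncated (e.g. the "2/3 rule").
import numpy as np

N = 32                                   # grid points
x = 2.0 * np.pi * np.arange(N) / N
k1, k2 = 10, 12                          # both resolvable on this grid
u = np.cos(k1 * x)
v = np.cos(k2 * x)

w_hat = np.fft.rfft(u * v) / N           # product formed in grid space
# The true product contains wavenumbers k1+k2=22 and |k1-k2|=2, but k=22
# is above the Nyquist limit (N/2=16) and aliases onto k = N - 22 = 10.
energetic = np.nonzero(np.abs(w_hat) > 1e-10)[0]
print("wavenumbers with energy:", energetic)   # expect 2 and 10; the 10 is aliased
```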


Awesome... thanks for carrying the conversation beyond my limited knowledge. Why do you think that the ECMWF's scheme applies greater diffusion than the leapfrog method? My understanding is that a damping scheme needs to be used with leapfrog to eliminate the computational mode that introduces false waves into the solution; however, it also damps the actual waves in the solution, even the well-resolved ones. Is the ECMWF using a higher-order scheme such as third-order Runge-Kutta or Adams-Bashforth? Would that require additional damping/diffusion that would have a greater smoothing effect than the schemes employed with leapfrog alone? Sorry if I am bombarding you with a lot of questions... perhaps it would be worth answering them in another thread such as this one?


Yeah, we should probably take some of this discussion to the Met 101 threads.

Damping schemes can help mask computational modes, but they can also act to mask various other instabilities. "Diffusion" is generally way over-used (and abused) in NWP. It's artificial.

The EC is actually using a three-time-level semi-Lagrangian, semi-implicit scheme (if you want the nitty-gritty details, go here: http://www.ecmwf.int/research/ifsdocs/CY37r2/IFSPart3.pdf). It doesn't require additional damping, but it does have some damping/smoothing characteristics of its own. An SL/SI scheme is much more stable than leapfrog, or some of the other schemes you mention, and allows one to get away with much larger time steps (which is one of the reasons they can run at such high spatial resolution).
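A minimal 1D sketch of why a semi-Lagrangian step tolerates a large time step (constant wind, linear interpolation; a real model like the IFS uses higher-order interpolation plus the semi-implicit treatment of the fast terms, so this is only illustrative):

```python
# Toy 1D semi-Lagrangian advection with a constant wind: each grid point
# looks "upstream" to its departure point and interpolates, so there is
# no CFL blow-up even when the Courant number c*dt/dx exceeds 1.
import numpy as np

N = 100
dx = 1.0 / N
c, dt = 1.0, 0.025                       # Courant number c*dt/dx = 2.5 > 1
x = np.arange(N) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial Gaussian blob

for _ in range(40):
    s = ((x - c * dt) % 1.0) / dx        # departure points, in grid units (periodic)
    j = np.floor(s).astype(int)          # grid index left of each departure point
    w = s - j                            # linear interpolation weight
    j %= N                               # guard against floating-point edge cases
    u = (1.0 - w) * u[j] + w * u[(j + 1) % N]

print("max after 40 big steps:", u.max())  # stays bounded; interpolation adds some smoothing
```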

We are working on implementing a similar scheme into the GFS (well, that part has already been done, we're now experimenting/tuning/etc.), which will allow us to increase resolution significantly without much more in the way of resources.



Yes, the NAM is famous for being too slow w.r.t. shortwave propagation. It was especially bad when it first came out in 2006, but it still displays this behaviour, especially with the low-amplitude, short-wavelength waves typically found along the polar front across southern CA and the northern tier of states. More importantly, it also shows up in feedback scenarios, where even slight timing differences can make a significant difference in how a model amplifies intense cyclones (i.e., rapid-feedback cyclogenesis). I was unaware this problem may potentially be related to the use of finite differencing.


Thanks Adam! This makes it a lot easier to follow the conversation.


Great information, dtk! I'll dig into the PDF for more details of the time stepping in the ECMWF. What I am really curious about is what makes the SL/SI scheme more stable. Is it mainly because it's using a third-order time-stepping scheme?

The latter is really great news, since I know the biggest problem right now is the lack of money for leasing a newer/faster supercomputer like the ECMWF has. This might be a way to work around those woes... although the ECMWF still has a faster supercomputer than the US government, so I'm still skeptical that we will be able to reach their level of resolution without some additional sacrifices like longer model run times and fewer runs.


First, I'm not actually a modeller (though I have developed simple models on my own), so take what I have to say with a grain of salt. I took an NWP course at UW 10 years ago, and the rest I've had to learn from colleagues or by playing with toy models.

The increased stability in the SL-SI scheme comes from the implicit part. Have you gotten into explicit versus implicit numerical methods... for implicit, perhaps something like Crank-Nicolson? Implicit schemes have absolute stability because they solve the equations using the future state as well as the current state of the system. Typically this involves some type of iterative algorithm (which can be expensive). For NWP, that cost is pretty easily offset by being able to use a much larger time step. Now, a semi-implicit scheme is not fully implicit (hence the name), so while it has enhanced stability relative to explicit time schemes like the ones previously mentioned, I don't think it exhibits absolute stability.
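A tiny illustration of that explicit-versus-implicit stability difference, using the decay equation du/dt = -lam*u (toy values, nothing model-specific):

```python
# Toy stability comparison for du/dt = -lam*u (the exact solution decays
# to zero).  Forward Euler (fully explicit) blows up once lam*dt > 2,
# while Crank-Nicolson (implicit, trapezoidal) stays bounded for any time
# step (the "absolute stability" mentioned above).
lam, dt, nsteps = 10.0, 0.5, 50          # lam*dt = 5, well past Euler's limit

u_explicit = 1.0
u_cn = 1.0
for _ in range(nsteps):
    u_explicit = u_explicit + dt * (-lam * u_explicit)   # forward Euler
    # Crank-Nicolson: average the RHS over the old and new states and
    # solve for the new state (trivial algebra here; a real model solves
    # a large linear system at this point).
    u_cn = u_cn * (1.0 - 0.5 * lam * dt) / (1.0 + 0.5 * lam * dt)

print("forward Euler: ", u_explicit)     # astronomically large (unstable)
print("Crank-Nicolson:", u_cn)           # small and bounded
```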


Actually, their current computer is only one generation ahead of ours (so yes, it's faster, but not leaps and bounds above ours). The two biggest issues for us are delivery times (we really have to do a lot in a very short time) and the fact that we do so much other than global NWP (regional NWP, rapid refresh/RUC, oceans, waves, air quality, regional and global ensembles, seasonal/climate, and now we're dabbling in space weather). Some of the other operational shops have some of these, but I don't think any of them come close to delivering the sheer volume of products that we do.

Our current supercomputer is almost at the end of its life (i.e., the contract is almost up and it's being used beyond its design specs). We are scheduled to transition to a new machine sometime in 2013.


Haha, interesting... I'm learning NWP from a professor (Ryan Torn) who earned his PhD at UW. Honestly, we aren't very far along. While we have not discussed Crank-Nicolson, we have spent a significant amount of time on Runge-Kutta-style methods like Heun and Matsuno and other two-stage time-stepping schemes. We haven't even started talking about the nitty-gritty details of spectral models, which will get more into Fourier transforms. I'm very much a beginner in this area of meteorology, so perhaps it was unwise on my part to start spewing information about how models (the GFS in particular) could be affected by inferior time-stepping schemes such as leapfrog. Thanks for the information again!
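For anyone following along, here is a minimal sketch of the two two-stage schemes mentioned (Heun and Matsuno), applied to a simple oscillation equation with made-up values; it shows the familiar result that Matsuno damps oscillations while Heun weakly amplifies them:

```python
# Minimal sketch of two two-stage schemes for du/dt = i*omega*u
# (exact amplitude stays 1):
#   Heun:    forward-Euler predictor, then trapezoidal corrector
#   Matsuno: forward-Euler predictor, then Euler-backward corrector
omega, dt, nsteps = 1.0, 0.2, 200

def rhs(u):
    return 1j * omega * u

def heun_step(u):
    u_star = u + dt * rhs(u)                        # predictor
    return u + 0.5 * dt * (rhs(u) + rhs(u_star))    # trapezoidal corrector

def matsuno_step(u):
    u_star = u + dt * rhs(u)                        # predictor
    return u + dt * rhs(u_star)                     # Euler-backward corrector

u_h = u_m = 1.0 + 0j
for _ in range(nsteps):
    u_h, u_m = heun_step(u_h), matsuno_step(u_m)

print("Heun amplitude:   ", abs(u_h))   # a bit above 1 (weakly amplifying)
print("Matsuno amplitude:", abs(u_m))   # well below 1 (Matsuno damps oscillations)
```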


Yeah, exactly. NCEP is running so much more than just the GFS and its GEFS suite. The folks at ECMWF have it easy, only worrying about their own suite of products, rather than the additional computational time needed to run the NAM/RUC/WRF/SREF and many of the other products you have been alluding to. However, you would think that with all of the products NCEP has to compute, we would have a faster supercomputer than the folks over in Europe, but that gets more into the bureaucracy of it all :)


Wrong UW (I graduated from Wisconsin and his PhD is from Washington). He was actually an undergraduate at Wisconsin when I was an MS student there. He's a pretty smart dude. I'm actually (loosely, for now) involved in a broad hurricane-prediction project with him (and a whole host of others). Once I finish my PhD at Maryland in May, I plan to start working on the direct assimilation of storm position/intensity in our var/ens hybrid scheme (something he'll likely be involved in). This just reminded me about an email I think I forgot to reply to a while back....


Just so I don't get myself in trouble I better not comment on this.


I meant to add: you are right to point out that a leapfrog scheme is inferior to some of the other, more sophisticated techniques... but those come at a cost. For small models this isn't an issue; for large NWP problems, every millisecond of computation matters (especially for us and our aggressive delivery schedule). You can "spew" any information you want (including about the GFS, which has many imperfections)... I just think that, in the instance for which this was brought up, there are factors that are much more important than the time-stepping choice.


  • 1 year later...


Spectral methods do introduce some error through the discrete sampling of waves used to calculate the spectral coefficients. In the Fourier case, the series coefficients are calculated with the discrete Fourier transform, which approximates the integral used to compute the coefficients in the continuous case. This leads to an aliasing error in which higher-wavenumber modes may not be resolved properly. Once the full features of the waves are represented (by using a greater sampling frequency), this error converges extremely quickly, far more rapidly than for any finite-difference scheme.
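A quick toy demonstration of that convergence difference, comparing a second-order centered finite difference against a Fourier spectral derivative for a smooth periodic function (just an illustration; operational models obviously involve far more than a single derivative):

```python
# Toy comparison of convergence rates: differentiate the smooth periodic
# function u(x) = exp(sin(x)) on [0, 2*pi) with (a) a second-order
# centered finite difference and (b) a Fourier spectral derivative.
# Once the function is resolved, the spectral error drops far faster with N.
import numpy as np

def max_errors(N):
    x = 2.0 * np.pi * np.arange(N) / N
    u = np.exp(np.sin(x))
    du_exact = np.cos(x) * u
    dx = 2.0 * np.pi / N
    du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)   # centered FD
    k = np.fft.fftfreq(N, d=1.0 / N)                        # integer wavenumbers
    du_sp = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))    # spectral derivative
    return np.max(np.abs(du_fd - du_exact)), np.max(np.abs(du_sp - du_exact))

for N in (8, 16, 32, 64):
    e_fd, e_sp = max_errors(N)
    print(f"N={N:3d}  centered FD error={e_fd:.2e}  spectral error={e_sp:.2e}")
```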
