Deterministic vs. Ensemble Guidance


dtk

One case? The Euro was not so great with the last storm either. It's really no better than the GFS right now.

The last storm? You mean the midwest storm that NONE of the other guidance was even close on and the Euro did well on?

Or you mean the last EC storm....where there was really only a single run that was too wound up.

We really need to learn to use the guidance that is available to us....one model (even the best one) isn't going to be right 100% of the time. The other global models (GFS, UK, GGEM) have closed the gap on the EC quite a bit, and we need to learn to interpret that. I think we also still have a tendency to hug the op/deterministic runs too much, when we need to learn to use/interpret ensembles (even the EC ensemble itself was throwing up red flags about the op run).

I split this off because I am interested to hear how to use the ensemble guidance more effectively. Especially beyond 120 hrs, it is difficult to make a deterministic forecast (versus a probabilistic one) from the ensemble guidance. I'd love to make probability forecasts, but unfortunately most end users don't understand the output or the format, especially in the world I work in.

I'd love to hear thegreatdr's thoughts on this too.

The last storm? You mean the midwest storm that NONE of the other guidance was even close on and the Euro did well on?

Or you mean the last EC storm....where there was really only a single run that was too wound up.

We really need to learn to use the guidance that is available to us....one model (even the best one) isn't going to be right 100% of the time. The other global models (GFS, UK, GGEM) have closed the gap on the EC quite a bit, and we need to learn to interpret that. I think we also still have a tendency to hug the op/deterministic runs too much, when we need to learn to use/interpret ensembles (even the EC ensemble itself was throwing up red flags about the op run).

I love this post. I'm sick of hearing how bad the EC was last week because of a single OP run.

I'm interested in learning how to use the ensembles more effectively.

If I remember correctly, the GFS ensemble mean at one point showed a track further west of the op for the 12/19/10 non-event. I thought it was a sign that the models would trend west with the track, but they never did, and reality panned out to be a non-event.

Plus, I was under the impression that the perturbations in each ensemble member were not random, but they indeed are.

So I'm starting to question what I really know about the ensembles.

I split this off because I am interested to hear how to use the ensemble guidance more effectively. Especially beyond 120 hrs, it is difficult to make a deterministic forecast (versus a probabilistic one) from the ensemble guidance. I'd love to make probability forecasts, but unfortunately most end users don't understand the output or the format, especially in the world I work in.

I'd love to hear thegreatdr's thoughts on this too.

First, I'm not actually a forecaster (I work in NWP/Data Assimilation) so I'm probably not the best person for this (in fact, I'd like to learn more what we can do on OUR end to help forecasters/end users). I myself am starting to get into ensemble-based DA....so I'm starting to work some with the ensemble folks at EMC and elsewhere. I am slowly becoming more interested in ensemble initialization and forecasting.

I think you're actually right in general about end users wanting point/specific/deterministic forecasts. Though, I lurk and read some of the comments by CWG readers at the Wash Post (Wes, Jason, Ian and others do a great job in trying to convey confidence when they're issuing snow outlooks, IMO).....and I think people can use/want this information.

To me, a few of the runs where op EC was on the far western extreme of its own ensemble guidance should be saved for educational purposes. It's not usually the mean that's even the most important element of the ensemble forecast....but the range of solutions (and how likely given solutions are). I think this is especially true for events like this that seem to be all or nothing....and have moving pieces that need to come together just right for a given evolution to pan out.

I'd love to hear others thoughts on this as well.

I'm interested in learning how to use the ensembles more effectively.

If I remember correctly, the GFS ensemble mean at one point showed a track further west of the op for the 12/19/10 non-event. I thought it was a sign that the models would trend west with the track, but they never did, and reality panned out to be a non-event.

Plus, I was under the impression that the perturbations in each ensemble member were not random, but they indeed are.

So I'm starting to question what I really know about the ensembles.

I'm curious as to why you think they're random. None of the major operational ensembles (GGEM, GFS, or EC) uses random initial perturbations. In fact, all three centers use different ways of perturbing the initial conditions.

As to your comment about the 12/19 event....I think this highlights why it's important to do more than just look at the mean. It's important to know what the individual members are doing, how they got there, and how likely individual solutions are. A mean in and of itself can be very misleading if there are a few (very deep, very different) outliers that are consistent with one another.

To me, a few of the runs where op EC was on the far western extreme of its own ensemble guidance should be saved for educational purposes. It's not usually the mean that's even the most important element of the ensemble forecast....but the range of solutions (and how likely given solutions are). I think this is especially true for events like this that seem to be all or nothing....and have moving pieces that need to come together just right for a given evolution to pan out.

Yeah, and this is the way I utilize them as well, and maybe I am using it optimally, but I'm always willing to be educated.

The problem with my world is that it's not one of weather-interested folks. All they want to know is how will it affect their bottom line. Probabilities are not things these type of people are used to or even want to look at.

Dtk,

Is there data available online that shows the verifications of GFS, CMC and ECMWF ensemble means vs. reality and vs. HPC guidance? That would be great to see. My guess is the GFS ensemble means are a close second to the ECMWF means.

Also, care to elaborate on how each ensemble member is perturbed (without getting into too much math)?

Yeah, and this is the way I utilize them as well, and maybe I am using it optimally, but I'm always willing to be educated.

The problem with my world is that it's not one of weather-interested folks. All they want to know is how will it affect their bottom line. Probabilities are not things these type of people are used to or even want to look at.

Probabilities are even discouraged in the aviation world, such as in TAF forecasts. PROB30, SLGT CHC, and CHC were all really popular back in the day; I think using "PROB" is now not allowed in the first 9 hours of the TAF. My point is, aviation folks probably know more about the weather than the general public, yet they too do not want probabilities. I think for the average Joe it gets extremely confusing when they are shown more than one option. I'm not sure probabilities as an end product are useful in most cases.

Yeah, and this is the way I utilize them as well, and maybe I am using it optimally, but I'm always willing to be educated.

The problem with my world is that it's not one of weather-interested folks. All they want to know is how will it affect their bottom line. Probabilities are not things these type of people are used to or even want to look at.

In terms of utilizing them, I guess the point I wanted to get across is that too many people just look at the mean of the ensemble of runs and use that as a deterministic forecast (sometimes the mean is great....sometimes it's useless).

Say you have 20 ensemble members, and 10 members say that something is going to happen at point A.....and 10 members say something is going to happen 1000 miles away at point C (some bimodal type distribution).....the mean is going to be pretty useless. However, the variance/individual members tell a very interesting story....
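To make that concrete, here's a minimal sketch (hypothetical numbers, nothing from an actual run) of why the mean of a bimodal ensemble points at a solution no member actually supports:

```python
import numpy as np

# Hypothetical 20-member ensemble: storm positions along one axis,
# 10 members at point A (0 mi) and 10 members 1000 mi away.
members = np.array([0.0] * 10 + [1000.0] * 10)

mean_pos = members.mean()   # 500 mi: a "compromise" no member shows
spread = members.std()      # huge spread is the tell that the mean is useless

print(mean_pos, spread)     # 500.0 500.0
```

The mean splits the difference; the spread (or just eyeballing the members) shows the distribution is really two camps.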

First, I'm not actually a forecaster (I work in NWP/Data Assimilation) so I'm probably not the best person for this (in fact, I'd like to learn more what we can do on OUR end to help forecasters/end users). I myself am starting to get into ensemble-based DA....so I'm starting to work some with the ensemble folks at EMC and elsewhere. I am slowly becoming more interested in ensemble initialization and forecasting.

I think you're actually right in general about end users wanting point/specific/deterministic forecasts. Though, I lurk and read some of the comments by CWG readers at the Wash Post (Wes, Jason, Ian and others do a great job in trying to convey confidence when they're issuing snow outlooks, IMO).....and I think people can use/want this information.

To me, a few of the runs where op EC was on the far western extreme of its own ensemble guidance should be saved for educational purposes. It's not usually the mean that's even the most important element of the ensemble forecast....but the range of solutions (and how likely given solutions are). I think this is especially true for events like this that seem to be all or nothing....and have moving pieces that need to come together just right for a given evolution to pan out.

I'd love to hear others thoughts on this as well.

You can apply basic statistical inference to interpreting these if you just want some basic intuition - as I understand it, it's basically the functional equivalent of a Markov chain Monte Carlo (MCMC) simulation (I'm sure it's vastly more complex than that, but the intuition is the same - if it's not, then everything else I say here should be ignored). One way to think about probabilistic forecasts (weather or anything else) is in terms of the shape of the distribution, as dtk notes. Think about a bell curve, where the height of the curve is the percentage of members and the horizontal axis is the position of the SLP relative to the op. If the bell curve is narrow, then you have confidence, which is obvious.

You can also get some ideas from the tails of the distribution - is the curve being pulled toward one end of the spectrum versus the other (that's skewness)? Is one tail fat (kurtosis)? Is the distribution normal, or does it have some other functional form? You can apply that thinking not only to the SLP and 5H positioning, but also to the strength, etc. These kinds of statistics would be easy to generate, imo... I don't know enough about the statistical processes to know whether each member has equal probability, but you'd want to know that as well.
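For anyone who wants to play with the distribution-shape idea, here's a rough sketch (plain NumPy, hypothetical member values) of computing the skewness and excess kurtosis of an ensemble's SLP positions:

```python
import numpy as np

def shape_stats(values):
    """Mean, std, skewness, and excess kurtosis of a set of ensemble member values."""
    x = np.asarray(values, dtype=float)
    m, s = x.mean(), x.std()
    z = (x - m) / s                      # standardized values
    skew = (z ** 3).mean()               # pulled tail: sign shows which side
    excess_kurt = (z ** 4).mean() - 3.0  # fat tails (>0) vs thin tails (<0)
    return m, s, skew, excess_kurt

# Hypothetical member SLP positions (miles east of the op run):
positions = [-50, -20, -10, 0, 5, 10, 15, 20, 30, 200]  # one far-east outlier
m, s, skew, kurt = shape_stats(positions)
print(f"skew={skew:.2f}")  # positive: the outlier drags the right tail out
```

A positive skew like this is exactly the "curve pulled toward one end" situation: most members cluster, one tail stretches way out.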

I'm a big proponent of ensembles, and we try to convey uncertainty by using probabilities of the different scenarios that are possible. However, I've worked at HPC, where you try to convey your confidence but have to make a deterministic forecast with a deadline that doesn't allow you to see the 12Z Euro until after you've released your grids, so you can mention it in your discussion but can't really use it in the forecast.

I think the way to use them is to look at the individual members and how they cluster, but don't just use the ensemble products: the solutions of the GGEM, GFS, Euro, and UKMET can all be thought of as members of a superensemble. The Euro may score better on average, but it has recently shown a tendency to over-phase lows. When it's only clustered with 4 members of its 50-member ensemble suite, it's probably an outlier, and at the very least I'd shift my low toward the ensemble mean or the largest cluster of the other members. More often than not, that will be better than going with the outlier in the longer time ranges, especially since to get a monster storm you always need the various shortwaves to phase; if something goes wrong with the timing and/or strength of the various features, you can end up with a much flatter solution.
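That clustering check can be sketched numerically. This is a hypothetical illustration (made-up positions, made-up radius), counting how many members put the low near the op solution:

```python
import numpy as np

def members_near(member_pos, ref_pos, radius_km=300.0):
    """Count ensemble members whose low center falls within radius_km of a
    reference position (e.g. the op run's low). Simple (x, y) km coords."""
    d = np.linalg.norm(np.asarray(member_pos) - np.asarray(ref_pos), axis=1)
    return int((d <= radius_km).sum())

# Hypothetical 50-member suite: 4 members near the op's phased low at (0, 0),
# 46 members with a flatter solution well off to the east.
phased = np.array([[50.0, 0.0], [0.0, 80.0], [-60.0, 20.0], [30.0, -40.0]])
flat = np.column_stack([np.linspace(800.0, 1000.0, 46), np.zeros(46)])
members = np.vstack([phased, flat])

print(members_near(members, ref_pos=[0.0, 0.0]))  # 4 -> the op looks like an outlier
```

Only 4 of 50 members support the op's solution, which is the "shift toward the largest cluster" situation described above.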

The ensemble mean can really be a tool in the 3- or 4-day range when all the ensembles and operational models are in pretty strong agreement. That was the case for Feb 5th last year. By 126 hrs, the GFS had only one member suggesting DC wouldn't have a big snow.

I really like some of the stuff and papers Rich Grumm has done with ensembles and normalized anomalies. When an ensemble mean starts showing huge normalized PW and moisture flux anomalies into the West Coast, you can usually figure on a major rainfall event along the coast and the lower slopes of the mountains. The only time you get huge anomalies is when all the members have similar timing and strength. Anyway, those are some thoughts.

Dtk,

Is there data available online that shows the verifications of GFS, CMC and ECMWF ensemble means vs. reality and vs. HPC guidance? That would be great to see. My guess is the GFS ensemble means are a close second to the ECMWF means.

Also, care to elaborate on how each ensemble member is perturbed (without getting into too much math)?

Verification figures must exist somewhere online, but I'm not sure where (the ensemble means start beating their higher-resolution deterministic counterparts after a few days in terms of gross stats...and the GEFS is slightly behind the ECE if I recall). I'm pretty sure that CPC routinely computes statistics (but I'll have to dig/ask around). I've only seen figures presented in branch meetings.

For the GEFS, they use a method called the Ensemble Transform Method, which is somewhat of a follow-on to the old bred-vector method. Basically, the idea is to perturb the analysis to capture structures that will grow with time. For each member, you essentially compute a pseudo-error by looking at short-term forecasts from the previous cycle (so for 00z, you use the ensemble of 6-hour forecasts from 18z). These errors are then "transformed/rescaled" with a given set of constraints and added back onto the new initial conditions. It's a bit more complicated than that, as information from the entire ensemble is also used. For the GEFS, there is actually an ensemble of 80 perturbations and short-term/cycled forecasts, from which 20 are chosen each cycle to be run out to 2 weeks and comprise the GEFS forecast. I've been meaning to ask my collaborators whether this selection process is completely random or sequential (i.e. members 1-20 are used at 00z, 21-40 at 06z, etc.).
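Not the operational code, obviously, but the rescale-and-recycle idea can be sketched in a few lines. Everything here is a toy: the "model" is just a made-up nonlinear map, and the norm/constraints are vastly simplified compared to the real Ensemble Transform:

```python
import numpy as np

def toy_model(x):
    """Stand-in 'forecast model': an arbitrary nonlinear map, NOT real NWP."""
    return x + 0.05 * np.sin(x) + 0.01 * x

def breed_cycle(prev_analysis, prev_perturbed, new_analysis, target_norm=0.1):
    """One breeding-style cycle: evolve both states, take the difference
    (the pseudo-error), rescale it to a fixed amplitude, and add it to
    the new analysis to get the next perturbed initial condition."""
    diff = toy_model(prev_perturbed) - toy_model(prev_analysis)
    diff *= target_norm / np.linalg.norm(diff)  # keep the growing structure, cap its size
    return new_analysis + diff

# Cycle it: the perturbation picks up growing structures but stays a fixed size.
state = np.array([1.0, -0.5, 2.0])
pert = state + np.array([0.1, 0.0, 0.0])
for _ in range(5):
    new_analysis = toy_model(state)        # pretend each new analysis = the forecast
    pert = breed_cycle(state, pert, new_analysis)
    state = new_analysis

print(np.linalg.norm(pert - state))        # ~0.1 by construction
```

The point of the rescaling step is that the perturbation's *shape* evolves toward whatever grows fastest, while its *amplitude* is pinned to a target size each cycle.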

I'm a big proponent of ensembles, and we try to convey uncertainty by using probabilities of the different scenarios that are possible. However, I've worked at HPC, where you try to convey your confidence but have to make a deterministic forecast with a deadline that doesn't allow you to see the 12Z Euro until after you've released your grids, so you can mention it in your discussion but can't really use it in the forecast.

I think the way to use them is to look at the individual members and how they cluster, but don't just use the ensemble products: the solutions of the GGEM, GFS, Euro, and UKMET can all be thought of as members of a superensemble. The Euro may score better on average, but it has recently shown a tendency to over-phase lows. When it's only clustered with 4 members of its 50-member ensemble suite, it's probably an outlier, and at the very least I'd shift my low toward the ensemble mean or the largest cluster of the other members. More often than not, that will be better than going with the outlier in the longer time ranges, especially since to get a monster storm you always need the various shortwaves to phase; if something goes wrong with the timing and/or strength of the various features, you can end up with a much flatter solution.

The ensemble mean can really be a tool in the 3- or 4-day range when all the ensembles and operational models are in pretty strong agreement. That was the case for Feb 5th last year. By 126 hrs, the GFS had only one member suggesting DC wouldn't have a big snow.

I really like some of the stuff and papers Rich Grumm has done with ensembles and normalized anomalies. When an ensemble mean starts showing huge normalized PW and moisture flux anomalies into the West Coast, you can usually figure on a major rainfall event along the coast and the lower slopes of the mountains. The only time you get huge anomalies is when all the members have similar timing and strength. Anyway, those are some thoughts.

Wes, great post as always.

I'm curious as to why you think they're random. None of the major operational ensembles (GGEM, GFS, or EC) uses random initial perturbations. In fact, all three centers use different ways of perturbing the initial conditions.

As to your comment about the 12/19 event....I think this highlights why it's important to do more than just look at the mean. It's important to know what the individual members are doing, how they got there, and how likely individual solutions are. A mean in and of itself can be very misleading if there are a few (very deep, very different) outliers that are consistent with one another.

Maybe I didn't explain clearly... I meant that the individual members aren't perturbed the same way every time from run to run. I was under the incorrect impression that they were.

Thanks for the posts usedtobe & dtk. Before I ask more questions I should probably start reviewing some of the COMET Numerical Weather Prediction modules again.

The operational ECMWF did quite well with this system in the eastern Midwest in the medium ranges, for what it's worth. Some of the other operational runs had a decent-strength low in the upper Midwest.

Thanks for the posts usedtobe & dtk. Before I ask more questions I should probably start reviewing some of the COMET Numerical Weather Prediction modules again.

Ditto. The past couple of weeks have shown me how little I know about modeling the atmosphere, and even about the atmosphere itself. Pretty humbling.

I really like some of the stuff and papers Rich Grumm has done with ensembles and normalized anomalies. When an ensemble mean starts showing huge normalized PW and moisture flux anomalies into the West Coast, you can usually figure on a major rainfall event along the coast and the lower slopes of the mountains. The only time you get huge anomalies is when all the members have similar timing and strength. Anyway, those are some thoughts.

Rich has done a fantastic job utilizing ensemble output with normalized anomalies. We use Rich's approach daily in our operational forecasting at Louisville and have seen our day 4-7 forecast skill scores increase quite a bit over the last 2 years. Wes gave a general explanation of how Rich uses the ensembles with the normalized anomalies. In general, when one sees very large normalized anomalies in precipitable water, moisture flux, and winds at various levels, you can expect a significant event; any normalized anomaly beyond about +/- 2 SD typically precedes a significant weather event. Rich has an operational site with all this ensemble and normalized anomaly data running in real time. The link is below. Definitely worth a look.

http://eyewall.met.psu.edu/ensembles/

SDF_Wx
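The +/- 2 SD screening described above boils down to a standardized anomaly. A trivial sketch (the 38 mm / 20 mm / 6 mm numbers are made up, not real climatology):

```python
def standardized_anomaly(value, climo_mean, climo_std):
    """Normalized anomaly: how many climatological standard deviations
    the forecast value sits from the climatological mean."""
    return (value - climo_mean) / climo_std

# Hypothetical ensemble-mean PW forecast vs. made-up climatology:
sd = standardized_anomaly(38.0, climo_mean=20.0, climo_std=6.0)
flag = abs(sd) >= 2.0   # Grumm-style "significant event" heads-up

print(sd, flag)  # 3.0 True
```

The useful part is the ensemble-mean angle: the mean only produces an anomaly this large when most members agree on timing and strength, so the flag doubles as a consensus check.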

This is an awesome discussion, and one that should not be lost in all the garbage elsewhere.

First, ECMWF put together an awesome discussion regarding the inner workings of a numerical model. Most of it is "beginner" oriented, but the amount and depth of information is amazing, and they delve into everything from ensemble forecasting, deterministic forecasting, data assimilation, parameterizations, perturbed analysis, probabilities, etc.

http://www.ecmwf.int...de/Preface.html

As for probabilities, one thing I find disturbing these days is that a lot of forecasters issue probabilities based off the numerical guidance alone. No consideration is given to the actual dynamic processes involved, and a lot of forecasters have actually become "model-casters". The inability to properly assess the dynamic environment results in the inability to properly issue a probabilistic forecast. The busted OTS East Coast storm a few weeks ago is one example: the operational/deterministic guidance all came on board for one series of runs (ECM/GFS/CMC/UK) when in reality the probability of the event never changed. In other words, forecasters were noting that it was a very distinct "thread-the-needle" event with three phases, two of which were low-amplitude waves, which had to occur at exactly the right time. This was a low-probability event, and the fact that all the deterministic global models had a hit didn't change the low probability of the event occurring at that forecast time.

Probabilities are even discouraged in the aviation world, such as in TAF forecasts. PROB30, SLGT CHC, and CHC were all really popular back in the day; I think using "PROB" is now not allowed in the first 9 hours of the TAF. My point is, aviation folks probably know more about the weather than the general public, yet they too do not want probabilities. I think for the average Joe it gets extremely confusing when they are shown more than one option. I'm not sure probabilities as an end product are useful in most cases.

Even in industries where actually issuing probabilities may be bad (a lot of industries rely on "yes/no" forecasts), the forecasters themselves need to be able to juggle probabilities and then convey them in an impact-based forecast with careful wording. While customers may want yes/no forecasts, there is no such thing, and we need to convey confidence in some manner even if we don't issue a probability.
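One standard way to bridge the probability-vs-yes/no gap is the classic cost/loss decision rule: act when the event probability exceeds the ratio of the cost of protecting to the loss if you don't. A sketch (the dollar figures are invented):

```python
def should_act(prob_event, cost_of_protecting, loss_if_hit):
    """Cost/loss rule: protecting pays off, in expectation, when the
    event probability exceeds the cost/loss ratio."""
    return prob_event > cost_of_protecting / loss_if_hit

# Hypothetical client: $10k to pre-treat roads, $100k loss if snow hits untreated.
# Even a 30% snow chance justifies treating, since 0.30 > 10/100.
print(should_act(0.30, 10_000, 100_000))  # True
print(should_act(0.05, 10_000, 100_000))  # False
```

This is how a yes/no product can still be probability-driven under the hood: the forecaster's confidence sets the probability, the client's economics set the threshold.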
