Ensemble members' verification scores


Does anybody have any verification stats or scores on individual ensemble members of each model?

Say, for example, that in the winter season P003 does better than P006 in the GFS ensemble at lead times under 96 hours, with stats and scores to support it.

Having data on this might help us interpret the ensemble suite in a way that lets us make better forecasts, particularly for EC storms.
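For concreteness, here is a rough sketch of how per-member verification could be tallied. It is only an illustration: the synthetic numpy arrays stand in for real GEFS forecasts and verifying analyses, and the member count, lead time, and RMSE metric are my assumptions rather than any official procedure.

```python
import numpy as np

# Hypothetical setup: 20 perturbed GEFS members verified against an analysis
# at a fixed lead time (say 96 h) over 100 forecast cases. The arrays below
# are synthetic stand-ins for real gridded fields.
rng = np.random.default_rng(0)
n_members, n_cases, n_gridpoints = 20, 100, 500

analysis = rng.normal(size=(n_cases, n_gridpoints))                         # "truth"
forecasts = analysis + rng.normal(size=(n_members, n_cases, n_gridpoints))  # truth + error

# Root-mean-square error for each member, averaged over cases and grid points
rmse_per_member = np.sqrt(((forecasts - analysis) ** 2).mean(axis=(1, 2)))

for m, rmse in enumerate(rmse_per_member, start=1):
    print(f"P{m:03d}  RMSE = {rmse:.3f}")
```

With real data, the interesting question is whether the spread in those per-member scores is any larger than sampling noise.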


Hm...I hope they're all equal (as far as statistical relevance is concerned). If one or two have an edge that can't be explained by the level of perturbation, then I think we've got a serious problem.


I would have to imagine (although I have nothing to back this up) that the spirit of ensembling, and the method of perturbing the initial conditions to arrive at each member, would prevent this sort of statistical analysis from being meaningful.

I was always under the impression that the "perturbing" of each member was done randomly. So, in essence, today's P001 on the 00z GFS ensemble is not the same P001 on tomorrow's 00z GFS ensemble. I would think (again, not saying for sure, just an assumption) that if EMC applied a consistent perturbation to the same member each day, it would not really be very chaotic...and personally, I think that is the facet of the atmosphere we are really trying to grasp with ensembling - the uncertainty, and the value of accounting for chaos.

So, to answer your question, I would think the only "member" you could perform relevant statistical analysis on would be the control, not the perturbed members.

Note - this could be another hint...it's called the control for a reason...
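A toy simulation of that reasoning, using made-up error distributions rather than any real GEFS output: if the perturbed labels are effectively reshuffled every cycle, their long-run scores collapse onto one another, and only the control (which is never perturbed) can carry a stable signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cycles, n_members = 2000, 20

# Assumption of this toy model: every cycle, each perturbed member's error is
# an independent draw from the SAME distribution, i.e. the label "P001" carries
# no information from one cycle to the next. The control gets its own, slightly
# lower-error distribution because it is never perturbed.
perturbed_errors = rng.normal(loc=1.0, scale=0.3, size=(n_cycles, n_members))
control_errors = rng.normal(loc=0.9, scale=0.3, size=n_cycles)

member_means = perturbed_errors.mean(axis=0)
print("spread of long-run member means:", round(member_means.max() - member_means.min(), 3))
print("control mean error:             ", round(control_errors.mean(), 3))
# The member means differ only by sampling noise (which shrinks as n_cycles
# grows), so no perturbed label keeps a stable edge; only the control stands out.
```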

Just my two cents; if someone from EMC, or anyone with better knowledge of NWP and ensembling, has other information, please feel free to correct me.

FWIW - I have always wanted to do this at my shop, but we never investigated it because of the assumptions described above.

Best of luck in your research.


I believe he's correct: the perturbations are done randomly, so you'd hopefully find that all the members verify at the same level.

Just a slight clarification....the perturbations are NOT random. If they were white noise, you wouldn't actually get that much spread in the forecasts. For a given cycle, there are actually 81 different initial conditions (1 control, 80 perturbed members)....and from this, 21 are run out as the GEFS (I have to double-check whether the selection process is random, sequential, or something else). I'm pretty sure that 'p01' for a given cycle has no correlation with 'p01' for a subsequent cycle. However, if the selection process isn't completely random, I think that 'p01' could be correlated with the 'p01' for a cycle 24 hours later.
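If someone wanted to test that last point empirically, one way (sketched below with placeholder arrays - actually pulling the GEFS initial conditions is left out, and the one-cycle pairing is just my assumption) is to define each member's perturbation as its initial condition minus the ensemble mean, then correlate the same label's perturbation across successive cycles:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cycles, n_members, n_gridpoints = 60, 20, 400

# Placeholder for each member's initial-condition field at each cycle;
# real GEFS fields would be loaded here instead.
ic_fields = rng.normal(size=(n_cycles, n_members, n_gridpoints))

# Perturbation = member minus the ensemble mean for that cycle.
perturbations = ic_fields - ic_fields.mean(axis=1, keepdims=True)

# Correlate the label 'p01' (index 0) with itself one cycle later.
label = 0
corrs = [np.corrcoef(perturbations[t, label], perturbations[t + 1, label])[0, 1]
         for t in range(n_cycles - 1)]
print(f"mean lag-1 correlation for p{label + 1:02d}: {np.mean(corrs):.3f}")
```

With the random placeholder data the correlation comes out near zero; with real initial conditions, a value well above zero would suggest the labels do carry over between cycles.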

Having said all that, based on how the perturbations are generated (even if there is some correlation/correspondence), I suspect that no single member would verify better than any other over a large enough sample.
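One way to put that suspicion to a test over a large sample would be a simple distribution-free comparison of the per-member errors, for example a Kruskal-Wallis test. The sketch below uses synthetic errors drawn from one common distribution; the member count, case count, and error values are all placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_members, n_cases = 20, 500

# Placeholder per-member, per-case verification errors (e.g. a storm-by-storm
# RMSE); here every member draws from the same distribution by construction.
errors = rng.gamma(shape=4.0, scale=0.5, size=(n_members, n_cases))

# Kruskal-Wallis: is there evidence that any member's errors differ from the rest?
stat, pvalue = stats.kruskal(*errors)
print(f"H = {stat:.2f}, p = {pvalue:.3f}")
# A large p-value is consistent with "no member is systematically better";
# a small p-value on real data would point to a genuine per-member edge.
```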

