Winter model performance discussion


cae

I'm going to try to keep adding to this thread, but I don't know how much time I'll have for it this year.  I was out of town for the November storm and didn't even track that one, so unfortunately that won't be included here.  I did track the storm earlier in December though.  I've had these maps for a while (I like to go through this stuff after storms) but didn't have time to write up a post until now.  I'm going to try to get it out today, before the deluge of events in January makes us forget all about this one. 

I'm going to try to do this a little differently, where each model gets its own post.  We'll start with the globals. 

 


Euro

The first plot below is the precip verification for this event, using the weather.us color scheme.  The second is a GIF of the Euro runs leading up to the event. 

[Precip verification map for this event (weather.us colors)]

[GIF of Euro runs leading up to the event]

Right up to the end, the Euro was too suppressed.  We'll see that this was common among the globals with this storm.
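For anyone who wants to make this kind of map themselves, here's a minimal sketch of the idea in Python, assuming you have the model's storm-total QPF and an observed precip analysis as gridded netCDF files.  All file and variable names below are placeholders, not what weather.us actually uses.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Hypothetical file and variable names -- substitute whatever your data source uses.
model = xr.open_dataset("euro_storm_total_qpf.nc")["tp"]
analysis = xr.open_dataset("stage4_storm_total.nc")["precip"]

# Put the model on the analysis grid so the two fields line up point for point.
model_on_obs = model.interp(lat=analysis["lat"], lon=analysis["lon"])

# Forecast minus observed: negative areas are where the model was too dry (suppressed).
error = model_on_obs - analysis

error.plot(cmap="BrBG", center=0)
plt.title("Storm-total QPF: model minus analysis")
plt.show()
```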

 


GGEM

The GGEM had a good storm.  It had one notable run 4-5 days out where it predicted a phase that never happened, but otherwise it generally had a better handle on the track than any of the other globals.  It also had some problems with the sharpness of the northern cutoff.  I wonder if that might be related to its relatively low resolution - the HRDPS did much better (see below).

[Precip verification map for this event (weather.us colors)]

[GIF of GGEM runs leading up to the event]

This was another case where the local performance seemed to be reflected in North American verification scores.  The GGEM's H5 verification scores for North America from three days out were the best of any global, which is likely why it did relatively well with the storm track.  But of course that doesn't mean it has become a better model than the others or that we should give it more weight than usual for the next storm.  There are day-to-day fluctuations in model performance, and this storm happened to hit when the fluctuations briefly put the GGEM on top.  You can see this in the chart below.  If you look at the CMC-GFS plot, you can see the area of green around the 8th and 9th of December.  That was when this storm hit.  But shortly after, the chart switches back to mostly red, as the GGEM mostly underperformed the (old) GFS for a couple of weeks.  Recently it has been more of a mix of green and red, which is fairly typical as we get into mid to late winter.

[Chart of North American H5 verification score differences by date, including a CMC-GFS panel (green = CMC better, red = GFS better)]
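Here's a rough sketch of the kind of day-to-day comparison behind a chart like that: for each date you take some verification score (e.g. a day-3 H5 anomaly correlation over North America) for both models and look at the sign of the difference.  The ACC values below are random placeholders just to show the mechanics; the real chart is built from actual verification data.

```python
import numpy as np
import pandas as pd

dates = pd.date_range("2018-12-01", "2018-12-31", freq="D")
rng = np.random.default_rng(0)

# Placeholder day-3 500-hPa anomaly correlations (higher is better).
acc_cmc = 0.88 + 0.08 * rng.random(len(dates))
acc_gfs = 0.88 + 0.08 * rng.random(len(dates))

diff = pd.Series(acc_cmc - acc_gfs, index=dates)

# Positive = CMC verified better that day ("green"), negative = GFS better ("red").
for day, value in diff.items():
    better = "CMC" if value > 0 else "GFS"
    print(f"{day.date()}  ACC diff {value:+.3f}  ({better} better)")
```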

 


FV3

Weather.us doesn't have FV3 maps, so we're switching to TT.  The first map below is the same as the precip analysis maps shown above, but using TT colors.  After that is the FV3 prediction. 

The FV3 clearly did better than the old GFS, but that was a low bar to clear for this storm, as the old GFS arguably did the worst among the globals.  Overall its performance was comparable to most of the other globals, keeping the storm suppressed and parts of central and southern VA too dry to the end.

[Precip analysis map for this event (TT colors)]

[GIF of FV3 runs leading up to the event]


Now for the regional models.  We'll start with the NAMs.

12k NAM

Not a bad storm for the 12k NAM, especially if you consider only the runs within about 48 hours.  This was a storm where the mesos clearly outperformed the globals.

[Precip analysis map for this event (TT colors)]

[GIF of 12k NAM runs leading up to the event]


RGEM

Overall the RGEM did well, which is not surprising considering how closely coupled it is to the GGEM.  It didn't have a sufficiently sharp cutoff on the northern edge, which made it the weenie model of choice around here.  I think it was basically letting too much virga reach the ground.  I've seen it do that before.

[Precip analysis map for this event (TT colors)]

[GIF of RGEM runs leading up to the event]

 


HRDPS

The high-res RGEM was arguably the best model for this storm, although it had the cutoff a little too sharp and QPF too high near the central NC/VA border.  But with the exception of one small wobble, it was locked in for every run, and not too far off from what happened.  I suspect the CMC has been putting more resources into this model relative to the RGEM, as it is slated to be the RGEM replacement.  This was a storm where I think it clearly did better.

[Precip analysis map for this event (TT colors)]

[GIF of HRDPS runs leading up to the event]


Final thoughts

The Canadian suite had a good storm, probably because for some reason the GGEM had a better handle on the H5 evolution this time around.  The HRDPS was particularly impressive.  As expected, there was a north trend among models at the end, but notably there was not one for the GGEM - it basically wobbled around the final solution.  I'm not sure why that is.  Although we've come to expect a north trend, there's no physical reason for it to happen.  It's purely modeling error, and as such it should eventually go away.  I'm not sure what it is about the GGEM that prevented it from oversuppressing this system, but it will be interesting to see if it performs similarly in similar setups down the line.
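As an aside, if you wanted to quantify a north trend rather than eyeball it from the GIFs, one crude way is to track the precip-weighted centroid latitude of each run's storm-total QPF and see whether it creeps north as the event approaches.  A rough sketch, with hypothetical file and variable names:

```python
import glob
import xarray as xr

# Hypothetical file pattern: one storm-total QPF file per initialization time,
# sorted so earlier runs come first.
run_files = sorted(glob.glob("ggem_run_*.nc"))

for path in run_files:
    qpf = xr.open_dataset(path)["tp"].fillna(0.0)  # hypothetical variable name
    # Precip-weighted mean latitude: a crude proxy for where the run puts the precip axis.
    centroid_lat = float((qpf * qpf["lat"]).sum() / qpf.sum())
    print(f"{path}: precip-weighted centroid latitude = {centroid_lat:.2f}N")
```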

The FV3 did respectably, but not great.  It was clearly a step up from the old GFS though.  A better test might be to see how well it does against the GFS when the GFS puts up more of a fight.
