Everything posted by cae

  21. Moved this post to the second page so we don't have too many graphics on the first page. https://www.americanwx.com/bb/topic/50849-post-event-model-discussion/?do=findComment&comment=4871375
  22. Moved this post to the second page so we don't have too many graphics on the first page. https://www.americanwx.com/bb/topic/50849-post-event-model-discussion/?do=findComment&comment=4871363
  23. A few thoughts on model performance for 02/17/18:
      1. Kuchera ratios generally performed better than snow depth, especially for the Euro. The above Euro snow depth plots look horrible, but the Kuchera ratio plots were much closer to reality. For comparison, here's the Kuchera ratio plot for the Euro's last run before the storm.
      2. Generally I don't put much weight in ops beyond 5 days, but the long-range performances of the Euro, GGEM, and ICON were impressive. The Euro picked up on the snow threat as soon as it entered its 240-hour window. It kept it for five runs, then lost it for four runs, then picked it up again 120 hours out and never lost it. The GGEM picked up on the storm 156 hours out and never lost it, and the ICON picked up on the storm at least 156 hours out and never lost it. I can't find images back that far, but I think the ICON got the storm as soon as it was within its 180-hour range. The GFS kept the storm suppressed for too long, first finding it 114 hours out, then losing it before finding it for good 102 hours out. I want to point out that this does not mean the GFS did poorly. It's not normal for ops to lock on to storms 6 1/2 days out. But someone focused on the Euro and GFS ops might have missed what the GGEM, ICON, and many ensemble members were indicating nearly a week in advance. It's hard to know which model will have the right idea far in advance. For the Super Bowl ice storm, it was the GFS. (If I find time, I'll try to add that one to this thread.)
      3. The Euro seemed to do a good job with qpf, and so did the ICON. The ICON calculates its own snow ratios, which paint a more impressive picture than the snow depth maps shown above. It consistently called for a widespread 1-3", with some relatively minor run-to-run fluctuations, and that's exactly what we got. The details of who got what were less consistent.
      4. The RGEM ensemble did a nice job of picking up early on the jackpot zone in central MD.
  24. February 17, 2018.
      Below is the Stage IV precipitation analysis (verification data) for the event. The color scale is the same as the one used for the model runs.
      Below are the 00z and 12z model runs up to the event. The Euro is top left, GFS is top right, GGEM is bottom left, and ICON is bottom right. This gif starts 12 hours before the last run. Only the last two runs are shown because there was rain on the two days leading up to the event, and weather.us doesn't have a way to distinguish between the precip totals.
      To get a better sense of how the models did in predicting snow, I plot the snow depth maps below. Snow depth is not a great metric because:
      1. Every model seems to calculate "snow depth" differently. (The Euro appears to use a more generous algorithm than the other models.)
      2. There is something wrong with the snow depth maps for the GGEM on weather.us. It appears to ignore all depths below 2", which makes it look like the GGEM predicted no snow in areas where it predicted low snow depths.
      Unfortunately, snow depth is the only metric of snowfall available for all four models on weather.us. The Euro is top left, GFS is top right, GGEM is bottom left, and ICON is bottom right. This gif starts 216 hours before the last run.
      For comparison with the above plots, here are the reported snowfall totals from LWX.
  25. I've been meaning to update this thread with some actual qpf data, and now seemed like a good time. Despite some help from @mappy and @BTRWx's Thanks Giving, I couldn't find any good qpf maps for events, so I made my own using Stage IV precipitation analysis. Here's a brief description of what that is:
      As far as I can tell, the Stage IV data is what's used for qpf verification analysis. One thing I noticed while putting together these maps is that the Stage IV analysis doesn't always match up well with the ground reports. For example, consider the event from January 4th. On the left are the CoCoRaHS reports; on the right is the Stage IV analysis. It's hard to tell because of the different color scales, but in general the Stage IV analysis thinks more precip fell than CoCoRaHS reports.
      This event was widely perceived as a model bust, especially for the Euro, which had an infamous final run that was much more generous with precip west of the bay than most of us observed. The CoCoRaHS data supports our observations. But if you compare the model forecasts to the Stage IV analysis, the models don't look so bad. This suggests that one of the problems with the January 4th event is that the models actually think they did pretty well, because the data used to calibrate them wasn't very accurate. The Stage IV analysis depends on radar data, and that was an event with a lot of virga. So it's possible that the Stage IV analysis counted a lot of virga as precip that we never actually saw. (I think one of the reasons for the Euro's last-minute shift was that it suddenly decided that more of this virga would reach the ground.)
      On one hand, this is concerning, because it will be hard to make the models much better than the data used to verify their predictions. On the other hand, we're talking about 0.1" of precip here. In the grand scheme of things, it's probably not a big deal. But when it means the difference between brown grass and snow angels, it can seem that way.
      Precip analysis aside, I also took this opportunity to re-arrange this thread. Each event gets its own post, and the top post contains links to all of the events. That should make it easier to manage and read. If you want to view all of the images for a single event, you may need to zoom out your browser view.
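For anyone curious how the Kuchera ratios discussed above are computed: a commonly cited form of the method derives the snow-to-liquid ratio from the warmest temperature anywhere in the column. The exact formulation below is a sketch based on published descriptions of the method, not the code weather.us or any model site actually runs, so treat the constants as assumptions.

```python
def kuchera_slr(max_column_temp_k: float) -> float:
    """Snow-to-liquid ratio from the warmest temperature in the column (K).

    A commonly cited form of the Kuchera method: the ratio pivots at
    271.16 K (about -2 C), increasing for colder columns and decreasing
    for warmer ones, floored at zero.
    """
    t = max_column_temp_k
    if t > 271.16:
        slr = 12.0 + (271.16 - t)
    else:
        slr = 12.0 + 2.0 * (271.16 - t)
    return max(slr, 0.0)


def kuchera_snowfall(qpf_in: float, max_column_temp_k: float) -> float:
    """Snowfall in inches: liquid QPF (inches) times the Kuchera ratio."""
    return qpf_in * kuchera_slr(max_column_temp_k)
```

The appeal over a fixed 10:1 ratio is visible in a marginal event like this one: a column topping out near freezing drops the ratio well below 10:1, while a genuinely cold column pushes it toward fluff.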
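The Stage IV vs. CoCoRaHS gap described above can be put on a number by sampling the analysis grid at each gauge location and averaging the differences. This is a toy sketch with a hypothetical 1-D lat/lon grid and made-up station reports; it is not the real Stage IV grid format or any CoCoRaHS API.

```python
def nearest_index(coords, value):
    """Index of the grid coordinate closest to a station coordinate."""
    return min(range(len(coords)), key=lambda k: abs(coords[k] - value))


def analysis_minus_reports(grid, lats, lons, reports):
    """Mean bias (analysis minus gauge) over (lat, lon, inches) reports.

    grid is a 2-D list indexed [lat][lon]; a positive result means the
    gridded analysis painted more precip than the gauges measured.
    """
    diffs = [grid[nearest_index(lats, la)][nearest_index(lons, lo)] - obs
             for la, lo, obs in reports]
    return sum(diffs) / len(diffs)
```

With numbers like the January 4th event, a positive mean bias would quantify the impression that Stage IV counted virga as precip the gauges never saw.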