February 12-13 Storm, Part II


stormtracker


Miller A storms rarely have 15:1 ratios unless you get crazy precip reports at the airport because of the heated elements of the gauges creating mini thermals. 12:1 is more likely.
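For the arithmetic behind those numbers: expected snow depth is just liquid-equivalent precipitation (QPF) times the snow-to-liquid ratio (SLR). A minimal sketch (the 1.00" QPF is illustrative, not from any forecast in this thread):

```python
def snow_depth(qpf_inches: float, slr: float) -> float:
    """Snow depth (inches) from liquid-equivalent precip (QPF)
    and a snow-to-liquid ratio (SLR)."""
    return qpf_inches * slr

# Illustrative 1.00" of liquid equivalent at the two ratios above.
print(snow_depth(1.00, 15.0))   # 15.0" of snow at 15:1
print(snow_depth(1.00, 12.0))   # 12.0" of snow at 12:1
```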

It was Jeff, and I think he was mainly referring to onset.


Here ya go. Most are exactly the same as the OP or west. Only a few (under 5) are east and drier.

[attachment: bwieuroens.png]

This, BTW, is what leads me to believe (hope) that the American models are struggling with quality problems in their initialization data. That does appear to be the root of the spread among the American solutions, and I assume it also accounts for the run-to-run variability in the NAM. There's surprising consistency among the EC and its ensemble (ECE) members; I'm assuming that's a byproduct of the 4DVAR assimilation process and, to a lesser extent, the higher resolution during the model's execution. It's interesting to note that the other 4DVAR models (NAVGEM and GGEM) are becoming "more EC-like" with each run, while the two primary NCEP models kinda wander around as they drift in the EC's direction.
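For anyone unfamiliar with 4DVAR: it finds the initial state that best fits both a background (prior) estimate and a window of observations, after a forecast model carries that state forward to each observation time. Here's a toy sketch with a scalar state and a made-up linear model; none of the numbers or operators reflect any center's actual system:

```python
from scipy.optimize import minimize

# Toy 4DVAR sketch: pick the initial state x0 that best fits both a
# background (prior) estimate and a window of later observations,
# where a (trivial) forecast model M carries x0 forward in time.
xb = 1.0                # background initial state (made up)
B = 0.5                 # background-error variance (made up)
R = 0.2                 # observation-error variance (made up)
obs = [1.2, 1.5, 1.9]   # observations at t = 1, 2, 3 (made up)

def M(x0, t):
    """Toy linear forecast model: advance state x0 to time t."""
    return x0 + 0.3 * t

def J(x):
    """4DVAR cost: background misfit plus observation misfits."""
    x0 = float(x[0])    # scipy passes the state as a 1-element array
    jb = (x0 - xb) ** 2 / B
    jo = sum((M(x0, t) - y) ** 2 / R for t, y in enumerate(obs, start=1))
    return jb + jo

res = minimize(J, x0=[xb])   # default BFGS handles this smooth cost fine
print(f"analysis initial state: {res.x[0]:.3f}")
```

The point of the sketch: because every observation in the time window constrains the same initial state through the model, the resulting analyses tend to be dynamically self-consistent, which is one plausible reason the EC members cluster so tightly.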

It's most exciting because we're about to have a (relative) crap-load of CPU at our disposal. Considering its significant handicaps, the GFS does pretty damn well. "Someone's" done a darn good job of compensating for relatively poor initialization quality, resolution, and (I assume) physics compared to the EC. You could make a decent argument that it should be even further behind the EC's verification scores. The depth and breadth of knowledge gained by "trying to make the best of a bad situation (starved for CPU)" should yield some impressive benefits in the 18-30 month timeframe as resources and development catch up.
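The verification scores in question are headline metrics like the 500 hPa anomaly correlation coefficient (ACC). A minimal sketch of how ACC is computed, with random arrays standing in for real height grids:

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Anomaly correlation coefficient (ACC): correlate the forecast's
    and the verifying analysis's departures from climatology."""
    fa = forecast - climatology          # forecast anomaly
    aa = analysis - climatology          # analysis ("truth") anomaly
    return np.sum(fa * aa) / np.sqrt(np.sum(fa ** 2) * np.sum(aa ** 2))

# Made-up fields standing in for 500 hPa height grids (73 x 144 ~ 2.5 deg).
rng = np.random.default_rng(0)
clim = rng.normal(5500.0, 50.0, size=(73, 144))
truth = clim + rng.normal(0.0, 30.0, size=clim.shape)
fcst = truth + rng.normal(0.0, 20.0, size=clim.shape)  # imperfect forecast
print(f"ACC = {anomaly_correlation(fcst, truth, clim):.3f}")
```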



sweet...thanks for the explanation and looking forward to the upgrades



ender - are you guys using a BlueGene machine for compute or a teracluster running TOSS?

