Currently monitoring guidance for late March 3rd through the 4th for the next significant event (beyond the 28th)


Typhoon Tip

Recommended Posts

2 minutes ago, Ed, snow and hurricane fan said:

For the Texas tornado threat/thread, I am trying to match 500 mb heights in Texas between models and the HRRR/RAP, which I assume will be updated, but TT and COD don't show 500 mb for those models. Anyone have a comparison of which model initialized best and is best now? It would affect your snow. Quincy occasionally posts in severe threads; other than that, they are met-free.

Did you compare on Pivotal?


3 minutes ago, TalcottWx said:

I just... Have no comment. I do not even know why I am tracking this. LOL

Can't you just fake food poisoning Friday and claim you have to see a witch doctor on top of Mount Wachusett to fix it or something? Or always save your old positive Covid tests, they are gold.

  • Haha 2

9 minutes ago, HoarfrostHubb said:

Did you compare on Pivotal?

No. I had WxBell until they tripled their rates. Joe Bastardi went completely political; he was once worth reading, back around 2005, but he wasn't worth paying for, even with good models, by 2017.

 

Just checked, and the free models aren't bad there. I had assumed they were PPV only. The 15Z HRRR's initialized 500 mb low was awfully close in location and wind fields to both the GFS and NAM at 3 hr from 12Z, so it doesn't seem initialization is why they differ later.
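If anyone wants to repeat that kind of check on their own machine, here's a minimal sketch. It assumes you've already loaded each model's 500 mb geopotential height onto 2D arrays (the loading step, e.g. via pygrib or xarray, is omitted), and the array names are hypothetical.

```python
import numpy as np

def low_center(z500, lats, lons):
    """Return (lat, lon) of the 500 mb height minimum on a 2D grid.
    z500, lats, lons are 2D arrays of the same shape (hypothetical inputs)."""
    j, i = np.unravel_index(np.argmin(z500), z500.shape)
    return lats[j, i], lons[j, i]

def separation_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (*p1, *p2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# e.g. how far apart the HRRR and GFS put the initialized 500 mb low:
# print(separation_km(low_center(hrrr_z500, hrrr_lat, hrrr_lon),
#                     low_center(gfs_z500, gfs_lat, gfs_lon)))
```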

I miss the HPC model diagnostic discussions.

  • Like 1

16 minutes ago, TalcottWx said:

It has become much better in the past few days. I would be more likely to trust HRRR vs NAM lately. And no, not because it's snowier. I just have a deep hate for the NAM.

Likewise. Too many times it has burned us. I do agree with Kevin that it sometimes sniffs out a warm layer, but I'm not sure about that with this type of system. Whatever happens, we need that secondary to close quickly; otherwise I could see it being closer to correct.


25 minutes ago, UnitedWx said:

Likewise. Too many times it has burned us. I do agree with Kevin that it sometimes sniffs out a warm layer, but I'm not sure about that with this type of system. Whatever happens, we need that secondary to close quickly; otherwise I could see it being closer to correct.

I can help solve part of that mystery -

the closing, and how proficiently it closes, with respect to the 850 and (in particular) 700 mb vortexes is what ends that warm penetration.

And you have to be careful with interpretation of the closing surface at that level. The standard met convention is 6 dam between isohypses, but you really have to use the wind flags to get a sense of the closing. The 700 mb chart may still show an open, dishpan trough structure, but if the wind flags begin pointing from the east within it - the instant the flow backs like that at that level - the warm intrusion cuts off very quickly and the overrunning turns into a TROWAL and/or bent-back structure(s)... usually within the snow-growth region, too. That is why, right as these mid-level centers are closing - if they close before rising above snow-growth temperatures - that's when you get the max snowfall rates.
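If you want to flag that backing signal in model output automatically, here's a minimal sketch. It assumes you already have the 700 mb u/v wind components (in m/s) at a point east or northeast of the developing mid-level center; the function names and the 45-135 degree "easterly" sector are illustrative choices, not a standard.

```python
import numpy as np

def wind_direction(u, v):
    """Meteorological wind direction in degrees (the direction the wind blows FROM)."""
    return np.degrees(np.arctan2(-u, -v)) % 360.0

def flow_has_backed_easterly(u, v, east_sector=(45.0, 135.0)):
    """True when the 700 mb flow points from the easterly sector,
    the signal described above for the warm intrusion cutting off."""
    d = wind_direction(u, v)
    return east_sector[0] <= d <= east_sector[1]

# Example: a wind from due east (090 degrees) at 15 m/s has u = -15, v = 0.
print(wind_direction(-15.0, 0.0))            # ~90.0
print(flow_has_backed_easterly(-15.0, 0.0))  # True
```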

  • Like 1
  • Thanks 2

32 minutes ago, weatherwiz said:

Guidance has been dreadful with temperatures pretty much throughout the country for weeks. I'm sure a large part of it has to do with the pattern but I mean they've been terrible. MOS, NBM, etc. I don't know much (well anything) about model physics, but I wonder how much of an impact something like this has on the models. 

In my quick opinion, the biggest differences stem from:

1) Data assimilation methodology - better ICs/BCs (initial and boundary conditions) lead to more accurate forecasts. You really need to initialize the atmosphere as realistically as possible. I'm not only referring to met. forcing data (heat and moisture fields at the surface, beneath the surface, and aloft), but static data as well... such as elevation, land cover, snow depth, ice cover, vegetation type, urban canopy, etc.

This stage of initialization differs depending on the agency.

2) Horizontal and vertical configuration of a modeling system (resolution). Finer resolution leads to better results short-term, but the spatial error grows more quickly with time. Coarser models don't perform as well initially, but their spatial error vs. time isn't as significant. Unfortunately, coarser models struggle with fine-scale phenomena such as moisture flux, which significantly contributes to the development/decay of a disturbance.

3) Microphysics, cumulus, and other parameterization options, followed by the remaining physics/dynamics choices. Depending on the resolution of a modeling system, these options can lead to sizable differences.

There's an essentially unlimited number of ways an agency can configure a modeling system. All of these differences lead to incremental error (beneath the surface, at the surface, aloft, and between grid cells in all dimensions - dx, dy, dz) that grows into large-scale discrepancies with time.
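To illustrate that last point about small initialization errors compounding, here's a toy sketch - not any operational model, just the classic Lorenz-63 system that's commonly used to demonstrate sensitivity to initial conditions:

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 toy system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

def integrate(state, n_steps=2000):
    """Integrate for n_steps (t = 20 time units with the default dt)."""
    for _ in range(n_steps):
        state = lorenz63_step(state)
    return state

control = np.array([1.0, 1.0, 1.0])
perturbed = control + np.array([1e-6, 0.0, 0.0])  # a one-in-a-million "analysis error"

print(integrate(control))    # control "forecast"
print(integrate(perturbed))  # diverges noticeably despite the tiny tweak
```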

  • Like 1
  • Thanks 1

3 minutes ago, MegaMike said:

In my quick opinion, the biggest differences stem from:

1) Data assimilation methodology - better ICs/BCs (initial and boundary conditions) lead to more accurate forecasts. You really need to initialize the atmosphere as realistically as possible. I'm not only referring to met. forcing data (heat and moisture fields at the surface, beneath the surface, and aloft), but static data as well... such as elevation, land cover, snow depth, ice cover, vegetation type, urban canopy, etc.

This stage of initialization differs depending on the agency.

2) Horizontal and vertical configuration of a modeling system (resolution). Finer resolution leads to better results short-term, but the spatial error grows more quickly with time. Coarser models don't perform as well initially, but their spatial error vs. time isn't as significant. Unfortunately, coarser models struggle with fine-scale phenomena such as moisture flux, which significantly contributes to the development/decay of a disturbance.

3) Microphysics, cumulus, and other parameterization options, followed by the remaining physics/dynamics choices. Depending on the resolution of a modeling system, these options can lead to sizable differences.

There's an essentially unlimited number of ways an agency can configure a modeling system. All of these differences lead to incremental error (beneath the surface, at the surface, aloft, and between grid cells in all dimensions - dx, dy, dz) that grows into large-scale discrepancies with time.

Great post and information!!! 

  • Like 1
  • Thanks 1

1 minute ago, TauntonBlizzard2013 said:

It was a troll post 

Come on guys, you see me on here all the time. I'm not a troll. I was just being funny, just trying to break the ice here since everybody's on their toes waiting to see what happens. Just a joke, that's all, lol.

  • Like 2
  • Weenie 2
