
Article on how winter weather forecasting has changed


usedtobe


I wrote this for CWG but they decided it was too long and technical, so I thought I'd post it here since I wasted a lot of time writing it.

 

Forecasting winter weather has changed significantly since the 1970s, driven by improved weather models and by data coverage that has grown exponentially thanks to satellites. An increased understanding of winter storms has led to conceptual models that help forecasters visualize the structure and dynamics of storms. Finally, the advent of personal computers and workstations has revolutionized how forecasters utilize those same weather models. The following article discusses some of the challenges forecasters faced when attempting to predict winter weather, from the perspective of a forecaster who worked at NMC (now NCEP's Weather Prediction Center (http://www.wpc.ncep.noaa.gov/#page=ovw)), and outlines a few of the techniques and methods that were used in the 1970s and '80s to make forecasts.

The weather models back then had much coarser vertical and horizontal resolution than they have today. The two primary forecast models, the LFM and the PE, had only seven layers, and the LFM's grid spacing varied between 127 and 190.5 km. To resolve a wave you need at least 4 grid points, so the smallest wavelength the LFM could resolve was over 500 km. The models were run only twice a day, as opposed to four times daily today. The NAM/WRF model has at least 60 layers, and a high-resolution version of it runs at 4 km grid spacing, sometimes allowing it to forecast those pesky smaller-scale bands of heavy precipitation that the old LFM or PE models had no hope of capturing. The lack of vertical and horizontal resolution limited how small a feature the models could predict and also hampered their handling of low-level cold air. With only seven layers in the vertical, there is no way to resolve shallow warm layers or important upper-level features like jet streaks.
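The four-grid-point rule above is easy to sketch. A minimal illustration (the function name is mine, and the 4-points-per-wave figure comes straight from the text; this is not from any operational system):

```python
# The smallest wave a grid-point model can represent is roughly
# (points per wave) x (grid spacing); the text cites 4 points per wave.
def min_resolvable_wavelength_km(grid_spacing_km, points_per_wave=4):
    """Shortest resolvable wavelength, in km, for a given grid spacing."""
    return points_per_wave * grid_spacing_km

# LFM grid spacing varied from roughly 127 to 190.5 km:
for dx in (127.0, 190.5):
    print(f"dx = {dx:6.1f} km -> ~{min_resolvable_wavelength_km(dx):.0f} km smallest wave")
```

At 127 km spacing that works out to a bit over 500 km, which is why the mesoscale snow bands modern 4 km models can sometimes capture were simply invisible to the LFM.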

Back then, PCs were not available. Forecast fields were received either by facsimile machine or on a plotter. Four basic forecast fields were received: a MSL surface pressure and thickness plot; an 850 mb (around 5,000 ft.) height and temperature map; a 700 mb height, relative humidity, and vertical velocity chart; and a 500 mb height and vorticity (spin) plot. Once we received the model data, forecasting would commence. Our snow forecasts were drawn on acetates with grease pencils, traced onto a paper copy, and then transmitted by fax to other locations. Today, winter weather forecasts by meteorologists at NCEP's Weather Prediction Center are drawn on workstations, and probabilistic forecasts of various snowfall amounts are made. In the 1970s, paper maps cluttered the walls; today the office is virtually paperless.

Most forecasters would start the forecast process with a hand analysis of the 500 and 850 mb heights and compare it with the models' initial analyses to glean whether the model initial fields were handling the various waves in the atmosphere correctly. For example, if a trough looked quite a bit sharper on the hand analysis than on the model analysis, the system might end up stronger than forecast, especially if there was more of a 500 mb ridge behind that wave. Often these differences proved fruitless in helping to modify a model forecast, but occasionally they would lead a forecaster to correctly adjust the guidance. A forecaster would also note where the strongest pressure falls were located relative to a low pressure system, to get a feel for whether the model was handling its movement correctly; pressure falls often pointed to the short-term direction the storm would move.

Forecasts were often grounded not just in the models but also in pattern recognition and rules of thumb. For example, Rosenbloom's rule avowed that model forecasts of rapidly deepening cyclones almost always were too far to the east. A forecaster therefore had to adjust not only the forecast track but also realign the predicted rain-snow line and axis of heaviest snow a bit farther to the west than the model showed. There were rules for forecasting the axis of heaviest snow based on the track of the surface low (around 150 nautical miles to the left of the track), the 850 mb low (90 nm left of the track), and the 500 mb vorticity center (around 150 nm to the left of the track). The heaviest snow typically was forecast during the period when the low was deepening most, and then you damped down snowfall amounts as the surface low started to become vertically stacked with the upper center of circulation. Of course, if the model had the storm track wrong, your heavy snow forecast might go down in flames.
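Those track-based offsets amount to a small lookup table. A sketch of how they could be encoded (the nautical-mile values are exactly the ones quoted above; the names are mine and purely illustrative):

```python
# Era rules of thumb: the heavy-snow axis lies this many nautical miles
# to the LEFT of each feature's track (values as quoted in the text).
SNOW_AXIS_OFFSET_NM = {
    "surface_low": 150,
    "850mb_low": 90,
    "500mb_vorticity_center": 150,
}

def snow_axis_offset_nm(feature):
    """How far left of the given feature's track to place the heavy-snow axis."""
    return SNOW_AXIS_OFFSET_NM[feature]
```

Of course, as the text notes, the lookup is only as good as the forecast track feeding it.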

One of the trickiest winter weather problems was, and still is, where to forecast the rain-snow line. In the 1970s and early '80s, there was no way to look at the vertical structure of the atmosphere in enough detail to parse out whether a warm layer was located somewhere above the ground. Forecasters instead relied on model forecasts of the depth (thickness) between two pressure levels: 1000 mb, a level near the ground, and 500 mb, located at around 18,000 feet. That depth is not fixed; it varies with the average temperature of the layer between the two pressure levels, shrinking when the layer is cold and expanding when it is warm. For the DC area, the critical thickness value was around 540 decameters; below that value, snow was deemed more likely than rain.

Unfortunately, in the middle of winter when low-level temperatures are really cold, it can snow at a 546 thickness, and it can rain with a thickness below 534 when near-ground temperatures are warm. To further parse the probabilities of different precipitation types, forecasters started looking at other thickness layers (1000–850 mb and 850–700 mb), which worked better but was still much less accurate than using model soundings as we do today. Model output statistics (MOS) (https://www.weather.gov/mdl/mos_home) also offered guidance about the most likely precipitation type and was quite good unless the model was having serious problems with a cold air damming case or an arctic air mass. Then relying on MOS was a prescription for a big wintry bust.
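The thickness rule and its midwinter caveats could be sketched as a toy classifier. This is not any operational algorithm; the 540 critical value and the 546/534 exceptions are the ones quoted above, while the surface-temperature cutoffs are invented for illustration:

```python
def ptype_from_thickness(thickness_dam, surface_temp_c=None, critical_dam=540.0):
    """Naive 1000-500 mb thickness rule for the DC area: below ~540 dam,
    snow is favored over rain. A very cold or very warm boundary layer can
    override the rule, which is why it often failed in midwinter."""
    if surface_temp_c is not None:
        # Cold near-ground air: snow has been observed up to ~546 dam.
        if surface_temp_c <= -5.0 and thickness_dam <= 546.0:
            return "snow"
        # Warm near-ground air: rain can fall even below ~534 dam.
        if surface_temp_c >= 5.0 and thickness_dam >= 534.0:
            return "rain"
    return "snow" if thickness_dam < critical_dam else "rain"
```

The override branches are exactly the failure mode the paragraph describes: without boundary-layer information, the bare 540 threshold busts in both directions.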

The lack of vertical resolution also played havoc when trying to predict how far south an arctic air mass might push. Both the LFM and the NGM (a model introduced in 1987) consistently busted by holding arctic fronts too far north in their surface and thickness forecasts. The 36-hour LFM forecast below is a case in point. Note how the model erroneously has a low in central Illinois and hints that the front may have pressed southward only to the Oklahoma-Texas border, while on the analysis for the same time the front (annotated blue line) is located across Kentucky and has pressed well south of the Texas border. Forecasters learned that the leading edge of where the lifted index gradient slackened was usually a good approximation of where the front would end up. In Oklahoma, where the model was predicting rain, the end result might be freezing rain, sleet, or even snow.

Hist_fig_1.png

 

 

The LFM’s and GFS’s rather crude horizontal and vertical resolution also led to problems trying to resolve smaller-scale features. Cold air damming and coastal fronts were notoriously poorly forecast, as the models often predicted high pressure systems to move off the coast prematurely. Note in the figure below that the 36-hour LFM forecast lowered pressures too much over Tennessee and pushed the surface high off the coast quicker than was observed. Instead of being off the coast, the high pressure system ended up still located over New England, in an ideal location for cold air damming. Just as important is the coastal trough that was setting up. By 7 AM January 8, a low had formed on that trough just off the North Carolina coast. Instead of the change to rain the LFM forecast suggested, the damming helped produce a 6-inch-plus snowstorm. I can remember using one of the rules of thumb to apply a similar correction to an LFM forecast of a storm: when the model’s thickness and 850 mb temperature forecasts suggested a rainstorm, I forecast freezing rain, and it ended up as a snowstorm. On those bad damming model forecasts, parsing out precipitation type was really tough.

hist_fig_2.png

 

Forecasts beyond 48 hours relied heavily on the PE model, and forecasts extended out to 5 days. Correct model predictions of major storms beyond 72 hours were rare. One exception was when the PE hit the February 7, 1978 Boston blizzard on an 84-hour forecast. When the PE forecast development of major snows along the East Coast beyond the first three days, more often than not, if the low developed at all it would end up considerably west of the forecast. That led one forecaster to proclaim "all lows go to Chicago." Today, day-3 forecasts are on average better than day-1 forecasts from the '70s and '80s. Forecasts now routinely extend out to day 7, with today's day-7 forecast better on average than the day-5 forecast from the late '80s.

The lack of resolution also played a role in the LFM’s under-prediction of snow during the Presidents' Day storm of 1979. The 36-hour LFM and GFS drastically underplayed the strength of the 500 mb system approaching the coast and were 22 mb too high with the storm’s central pressure off the North Carolina coast. Both the new model on the block, the NGM, and the LFM failed to forecast the 1987 Veterans Day storm, which was a rather small-scale event compared to most of our big snowstorms. The NGM, implemented in 1987, was an improvement over the LFM in most cases, but it was horrid at forecasting the southward movement of arctic fronts, often even worse than the LFM, and it had its own biases of tracking lows too far west and being too strong with them over land.

By the 1990s, workstations and PCs became commonplace and started revolutionizing how we looked at model data. Forecasters could now look for important upper-level features that often play a role in storm development and help produce banding features and the various kinds of instability that can lead to heavy snow. The famous "no surprise" snowstorm of January 2000, which was very much a surprise, helped pave the way for developing ensemble products. Post-storm runs of a model with simple tweaks to its initial analysis had some members that predicted the storm to lift northward and spread snow into Washington, something the operational models failed to do leading up to the storm.

Today, ensemble model runs with slightly different initial conditions or tweaks to their physics are routinely available to forecasters. Their guidance offers a more probabilistic approach to forecasting storms. Last winter, they allowed forecasters to start crowing about the potential of a major, possibly crippling snowstorm almost five days in advance. By early on January 19th, the European ensemble forecast system gave portions of the DC area a greater than 90% probability of having 12" of snow on the ground by 7 AM (see below). The U.S. ensembles (GEFS) were similarly bullish, as they had been days in advance of the February 5-6 blizzard of 2010.
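The exceedance probabilities those ensembles produce are conceptually simple: count the members meeting a threshold. A hypothetical sketch (the member values are invented for illustration, not actual ensemble output):

```python
def prob_exceeding(member_snow_in, threshold_in=12.0):
    """Fraction of ensemble members with snowfall at or above the threshold."""
    members = list(member_snow_in)
    return sum(m >= threshold_in for m in members) / len(members)

# Invented 10-member ensemble, storm-total snow in inches:
members = [14, 16, 13, 18, 20, 15, 11, 17, 19, 22]
print(f"P(>= 12 in) = {prob_exceeding(members):.0%}")  # 9 of 10 members -> 90%
```

When nearly every perturbed run produces a foot or more, as happened ahead of that storm, forecasters can message high confidence days in advance.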

 

hist_fig_3.png

With the click of a mouse, forecasters and weather enthusiasts can now pull up a model forecast sounding from the GFS or NAM anywhere on a map and see whether a warm layer is present or whether there is an elevated unstable layer that might produce convection. Not only that, but sophisticated precipitation-type products are available that automatically assess the thermal structure at each model grid point and map the resulting guidance, so anyone can see where the model thinks the rain-snow line will be located. These precipitation-type products, taken together with forecasts from the three or four best operational models and the GEFS and European ensemble output, allow forecasters to lay out the possible scenarios a potential storm might follow and weight their probabilities. The models can often provide a heads-up days in advance of a snowstorm.
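A crude version of such a gridpoint check might scan a forecast sounding for a warm layer aloft. This is a toy illustration of the idea, not the algorithm any real product uses:

```python
def crude_ptype(levels):
    """levels: (pressure_mb, temp_c) pairs ordered from the surface upward.
    Toy precipitation-type check: a warm layer aloft over a subfreezing
    surface suggests sleet or freezing rain rather than snow."""
    sfc_temp = levels[0][1]
    warm_layer_aloft = any(t > 0.0 for _, t in levels[1:])
    if warm_layer_aloft:
        return "sleet/freezing rain" if sfc_temp <= 0.0 else "rain"
    return "snow" if sfc_temp <= 0.0 else "rain"

# A sounding with a melting layer near 850 mb over a subfreezing surface:
print(crude_ptype([(1000, -2.0), (850, 3.0), (700, -5.0)]))
```

Operational products refine this with melting and refreezing energy in each layer, but the warm-layer test is the heart of why model soundings beat bulk thickness rules.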

The current models can now usually resolve the development of the coastal trough and forecast cold air damming, though they still sometimes have trouble holding onto the cold air at the surface long enough. Winter weather forecasting has come a long way and is much more grounded in science than it was back in the '70s and '80s. Winter weather forecasts have improved significantly; even so, as Capital Weather Gang readers know, busts still happen.

 

 



One of the most interesting posts I've ever seen posted here.

I'm really interested in how weather was forecast before satellite info was available. For instance, weather forecasts were critical in WW2, most notably the D-Day forecast. I wonder how they made those forecasts.


1 minute ago, WinterWxLuvr said:


That's a good question, since computer models were not available. They did have ship reports and upper-air data from planes (I think). If you remember Louis Allen (some here might), I think he was involved in forecasting during WW2. The U.S. forecasters were correct in predicting a break in the weather for D-Day. I wonder if there is a book about the forecasts during WW2.


Stuff like this is what I enjoy the most about weather. Reading how forecasters made the best of what was available back then shows just how much sleuthing they did to figure out what might happen in a storm.

I need to re-read this a few times to fully digest it.

 

What a fantastic article, Wes.  Thank you for sharing. :)


I always imagined during pre-computer days that forecasters would call each other and compare what is going on out their windows. Like "heavy rain in Chicago? I better up my pops for tomorrow in Cleveland." Then "Oh, you talked to Chicago and upped pops tomorrow? I'll add a slight chance in NYC in 2 days and monitor trends". Lol


I remember Louis Allen going back to his days on Channel 7.  He actually drew the weather maps during his broadcast segment, lol.  Good times !  Channel 9 was fortunate to be able to bring in Gordon Barnes when Louis died suddenly after only a couple of years at WTOP.  I recall being sad that we had lost a weather "heavyweight" on local news, but Barnes filled the void capably.


Thanks for the nice comments. Jason has decided to edit the article and use it on CWG. Bob, looking out the window is still a good idea, and I always look to see when Andy starts getting it, as that usually means it will get to me fairly soon. As a kid I was a big Louis Allen fan. He was quite an artist with his Woodles.


"Forecasters and weather enthusiasts with the click of a mouse can now pull up a model forecast sounding from the GFS or NAM anywhere on a map and look to see whether a warm layer is present or whether there is an elevated unstable layer that might produce convection."

 

Awesome article! And this quote from above still amazes me. The amount of computing power needed to be able to pull soundings from anywhere in the country on demand must be incredible. And then of course in the case of the GFS those computations are run every 6 hours. I think weather models may be one of the most important inventions in human history. We already KNOW that computers are.


On 2/2/2017 at 6:02 PM, usedtobe said:

 

 

 

I wrote this for CWG but they decided it was too long and technical so I thought I'd post it here sinc eI wasted a lot of time writing it. 

 

Forecasting winter weather has changed significantly since the 1970s.  Driven by improved weather models, data coverage that has improved exponentially due to satellites. Increased understanding of winter storms has led to conceptual models that have helped forecasters visualize the structure and dynamics of storms.  Finally, the advent of personal computers and workstations has revolutionized how forecasters utilize those same weather models.    The following article discusses some of the challenges forecasters faced when attempting to predict winter weather from a perspective of a forecaster who worked at NMC (now NCEP’S Weather Prediction Center (http://www.wpc.ncep.noaa.gov/#page=ovw)) and attempts to outline a few of the techniques and methods that were utilized in the 1970s and ‘80s to make forecasts.  

 

The weather models back then had much coarser vertical resolution than they have today.   The two primary forecast models, the LFM and PE models had only seven layers and the LFM had grid spacing varied between 127 and 190.5 km.  To resolve a wave you need at least 4 grid points so the smallest wavelength the LFM could resolve was over 500 nm.  The models were only run twice a day as opposed to the 4 times daily today.   The NAM/WRF model has at least 60 layers and a high resolution version of it has 4 km resolution allowing it to sometimes forecast some of those pesky smaller scale bands of the heavy precipitation that the old LFM or PE models had no hopes of forecasting.  The lack of vertical and horizontal resolution limited the how small and feature the models could predict and also hampered their handling of low level cold air. When you have only 7 layers in the vertical, there is no way to resolve small warm layers or important upper level features like jet streaks. 

 

Back then, PCs were not available.  Forecast fields were received either by a facsimile machine or a plotter.  4 basic forecast fields were received: a MSL surface and thickness plot; a 850mb height (around 5,000 ft.) and temperature map; a 700 height, relative humidity and vertical velocity; and a 500 mb height and vorticity (spin) plot.   Once we received the model data forecasting would commence.  Our snow forecasts were made on acetates with grease pencils and then traced to a paper copy and then transmitted by fax to other locations.   Today winter weather forecasts by meteorologists at NCEP’s Weather Prediction Center are drawn on workstations and probabilistic forecasts of various snowfall amounts are made.   In the 1970s, paper maps cluttered the walls, today the office is virtually paperless.  

 

Most forecasters would start the forecast process with a hand analysis of the 500 and 850 mb heights and compare it with the initial analyses from the models, to glean whether the model initial field was handling the various waves in the atmosphere correctly. For example, if a trough looked quite a bit sharper on the hand analysis than on the model analysis, the atmosphere might end up with a stronger system than forecast, especially if there was more of a 500 mb ridge behind that wave.  Often these differences proved fruitless in helping modify a model forecast, but occasionally they led a forecaster to correctly adjust the guidance.  A forecaster would also note where the strongest pressure falls were located relative to a low pressure system, to get a feel for whether the model was handling its movement correctly.  Pressure falls often pointed to the short-term direction the storm would move. 

 

Forecasts were often grounded not just in the models but also in pattern recognition and rules of thumb.  For example, Rosenbloom's rule held that model forecasts of rapidly deepening cyclones were almost always too far to the east. Therefore a forecaster had to adjust not only the forecast track but also realign the model's predicted rain-snow line and axis of heaviest snow a bit farther to the west.  There were rules for forecasting the axis of heaviest snow based on the track of the surface low (around 150 nautical miles to the left of the track), the 850 mb low (90 nm to the left of the track) and the 500 mb vorticity center (around 150 nm to the left of the track).  The heaviest snow was typically forecast during the period when the low was deepening most rapidly, and snowfall amounts were damped down once the surface low started to become vertically stacked with the upper-level center of circulation.  Of course, if the model had the storm track wrong, your heavy snow forecast might go down in flames. 
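Those left-of-track offsets can be captured in a tiny lookup, purely as an illustration of the rules of thumb just described. The numbers come straight from the text above; the function and feature names are invented for this sketch and were never part of any operational tool.

```python
# Rule-of-thumb offsets for the axis of heaviest snow, measured to the
# left of the track of each feature (values in nautical miles, per the
# rules of thumb described in the text; names are illustrative only).

HEAVY_SNOW_OFFSET_NM = {
    "surface_low":    150,  # ~150 nm left of the surface low track
    "850mb_low":       90,  # ~90 nm left of the 850 mb low track
    "500mb_vort_max": 150,  # ~150 nm left of the 500 mb vorticity center
}

def heavy_snow_axis_offset(feature: str) -> int:
    """Return the left-of-track offset (nm) for a tracked feature."""
    return HEAVY_SNOW_OFFSET_NM[feature]

print(heavy_snow_axis_offset("850mb_low"))  # 90
```

In practice a forecaster would apply whichever feature's track they trusted most that day, which is exactly why a bad model track sank the whole snowfall forecast.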

 

One of the trickiest winter weather problems was, and still is, where to forecast the rain-snow line.  In the 1970s and early 80s, there was no way to look at the vertical structure of the atmosphere in enough detail to parse out whether a warm layer was located somewhere above the ground.  Forecasters relied on model forecasts of the thickness between two pressure levels: 1000 mb (a level near the ground) and 500 mb, which is located at around 18,000 feet.  The thickness between those two levels is not constant; it varies with the average temperature of the layer, shrinking when the layer is cold and expanding when it is warm.  For the DC area, the critical thickness value was around 540 decameters: below that value, snow was deemed more likely than rain. 

 

Unfortunately, in the middle of winter when low-level temperatures are really cold, it can snow at a 546 thickness, and it can rain with a thickness below 534 when the near-ground temperatures are warm.  To further parse the probabilities of different precipitation types, forecasters started looking at other thickness layers (1000-850 mb and 850-700 mb), which worked better but was still much less accurate than using model soundings as we do today.  Model output statistics (MOS) (https://www.weather.gov/mdl/mos_home) also offered guidance about the most likely precipitation type and was quite good unless the model was having serious problems with a cold air damming case or an arctic air mass.  Then relying on MOS was a prescription for a big wintry bust. 
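A toy classifier makes the thickness rule, and its failure modes, concrete. The 540 dam critical value and the 534/546 caveats come from the text above; the exact tie-breaking surface temperatures (0 to 3 C) are assumptions of this sketch, not the rule forecasters actually used.

```python
# Crude rain/snow call from 1000-500 mb thickness (decameters) plus a
# surface temperature override, mimicking the old thickness technique.
# Thresholds inside the ambiguous 534-546 dam band are illustrative.

def ptype_from_thickness(thickness_dam: float, sfc_temp_c: float) -> str:
    """Classify precipitation type from thickness and surface temp."""
    if thickness_dam < 534:
        # Usually snow, but very warm near-ground air can force rain.
        return "rain" if sfc_temp_c > 3.0 else "snow"
    if thickness_dam > 546:
        return "rain"
    # The gray zone: lean on surface temperature either side of 540 dam.
    if thickness_dam <= 540:
        return "snow" if sfc_temp_c <= 1.0 else "rain"
    return "rain" if sfc_temp_c > 0.0 else "snow"

print(ptype_from_thickness(538.0, -2.0))  # snow
print(ptype_from_thickness(542.0, 4.0))   # rain
```

The point of the sketch is how little information one number carries: nothing in it can see a shallow warm nose aloft, which is exactly what model soundings later fixed.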

 

The lack of sufficient vertical resolution also played havoc when trying to predict how far south an arctic air mass might push.  Both the LFM and the NGM (a model introduced in 1987) consistently busted by holding arctic fronts too far north in their surface and thickness forecasts.  The 36 hour LFM forecast below is a case in point.  Note how the model erroneously has a low in central Illinois and hints that the front may have pressed southward only to the Oklahoma-Texas border, while on the analysis for the same time the front (annotated blue line) is located across Kentucky and has pressed well south of the Texas border.  Forecasters learned that the leading edge of where the lifted index gradient slackened was usually a good approximation of where the front would end up.  In Oklahoma, where the model was predicting rain, the end result might be freezing rain, sleet or even snow.

 

 

Hist_fig_1.png

 

 

The LFM's and PE's rather crude horizontal and vertical resolution also led to problems resolving smaller-scale features.  Cold air damming and coastal fronts were notoriously poorly forecast, as the models often predicted high pressure systems to move off the coast prematurely.  Note on the figure below that the 36 hour LFM forecast lowered the pressures too much over Tennessee and pushed the surface high off the coast quicker than was observed.  Instead of being off the coast, the high pressure system ended up still located over New England, in an ideal location for cold air damming.  Just as important is the coastal trough that was setting up.  By 7 AM January 8, a low had formed on that trough just off the North Carolina coast.  Instead of an LFM forecast that suggested the snow might change to rain, the damming helped produce a 6-inch-plus snowstorm.  I can remember using one of the rules of thumb to make a similar correction to an LFM forecast when the model's thickness and 850 mb temperature forecasts suggested a rainstorm: I forecast freezing rain, and the event ended up as a snowstorm.  On those bad damming forecasts, parsing out precipitation type was really tough.

hist_fig_2.png

 

Forecasts beyond 48 hours relied heavily on the PE model and extended out to 5 days.  Correct model predictions of major storms beyond 72 hours were rare. One exception: the PE hit the February 7, 1978 Boston blizzard on an 84 hour forecast.  When the PE forecast development of major snows along the East Coast beyond the first 3 days, more often than not, if the low developed at all, it would end up considerably west of the forecast position.  That led one forecaster to proclaim that "all lows go to Chicago."  Today, day 3 forecasts are on average better than day 1 forecasts from the 70s and 80s.  Forecasts now routinely extend out to day 7, with today's day 7 forecasts better on average than day 5 forecasts from the late 80s.

 

The lack of resolution also played a role in the LFM's under-prediction of snow during the President's Day storm of 1979.  The 36 hour LFM forecast drastically underplayed the strength of the 500 mb system approaching the coast and was 22 mb too high with the storm's central pressure off the North Carolina coast.  Both the new model on the block, the NGM, and the LFM failed to forecast the 1987 Veterans Day storm, which was a rather small-scale event compared to most of our big snowstorms.  The NGM, implemented in 1987, was an improvement over the LFM in most cases, but it was horrid at forecasting the southward movement of arctic fronts, often even worse than the LFM, and it had its own biases of tracking lows too far west and being too strong with them over land. 

 

By the 1990s, workstations and PCs became commonplace and started revolutionizing how we looked at model data.  Forecasters could now look for important upper-level features that often play a role in storm development and help produce banding features and various kinds of instability that can lead to heavy snow.  The famous "no surprise" snowstorm of January 2000, which was very much a surprise, helped pave the way for developing ensemble products.  Post-storm runs of a model with simple tweaks to its initial analysis had some members that predicted the storm to lift northward and spread snow into Washington, something the operational models failed to do leading up to the storm.

 

Today, ensemble model runs with slightly different initial conditions or tweaks to their physics are routinely available to forecasters.  Their guidance offers a more probabilistic approach to forecasting storms.  Last winter, they allowed forecasters to start crowing about the potential for a major, possibly crippling snowstorm almost 5 days in advance.  By early on January 19th, the European ensemble forecast system gave portions of the DC area a greater than 90% probability of having 12" of snow on the ground by 7 AM (see below).  The U.S. ensembles (GEFS) were similarly bullish.  Prior to the February 5-6 blizzard of 2010, the GEFS was also bullish days in advance of the storm. 

 

 

hist_fig_3.png
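The probabilistic use of ensembles described above reduces to a simple idea: run many slightly perturbed members and count how many exceed a threshold. The sketch below shows that counting step with made-up member snowfall amounts; they are not from any real GEFS or European run.

```python
# Probability of exceeding a snowfall threshold, estimated as the
# fraction of ensemble members at or above it. Member amounts (inches)
# are invented for illustration.

def exceedance_probability(member_snow_in, threshold_in):
    """Fraction of ensemble members with snowfall >= threshold."""
    hits = sum(1 for amt in member_snow_in if amt >= threshold_in)
    return hits / len(member_snow_in)

members = [10.0, 14.5, 18.0, 22.0, 13.0, 16.5, 9.0, 20.0, 15.0, 17.5]
print(f"P(>= 12 in) = {exceedance_probability(members, 12.0):.0%}")  # 80%
```

Real ensemble post-processing weights and calibrates the members rather than counting them raw, but the raw fraction is the core of the probability maps forecasters now look at.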

Forecasters and weather enthusiasts can now, with the click of a mouse, pull up a model forecast sounding from the GFS or NAM anywhere on a map and see whether a warm layer is present or whether there is an elevated unstable layer that might produce convection.  Not only that, but sophisticated precipitation type products are available that automatically assess the thermal structure at each grid point of the model.  This precipitation type guidance is mapped so that anyone can see where the model thinks the rain-snow line will be located.  The availability of these products, taken together with forecasts from the 3 or 4 best operational models and the GEFS and European ensemble output, allows forecasters to lay out the possible scenarios a potential storm might follow and weigh their probabilities.  The models can often provide a heads-up days in advance of a snowstorm. 
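The warm-layer check at the heart of those automated precipitation type products can be sketched very simply: scan a model sounding for a layer above the surface that climbs back over 0 C while the surface stays at or below freezing, the classic sleet or freezing rain setup. The function and the sounding values below are invented for illustration; operational algorithms are far more elaborate.

```python
# Detect an elevated warm nose in a model sounding, the signature that
# separates freezing rain/sleet profiles from plain snow profiles.
# Sounding values are made up for this example.

def has_elevated_warm_layer(sounding):
    """sounding: list of (pressure_mb, temp_c), ordered surface -> aloft.
    True if the surface is at or below freezing but some level aloft
    is above 0 C."""
    sfc_temp = sounding[0][1]
    return sfc_temp <= 0.0 and any(t > 0.0 for _, t in sounding[1:])

# Surface below freezing with a warm nose near 850 mb: an ice profile.
profile = [(1000, -2.0), (925, -1.0), (850, 2.5), (700, -4.0), (500, -20.0)]
print(has_elevated_warm_layer(profile))  # True
```

This is precisely the structure the old 1000-500 mb thickness number could never reveal, which is why forecast soundings were such a leap forward for rain-snow line forecasting.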

 

The current models can now usually resolve the development of the coastal trough and forecast cold air damming, though they still sometimes have trouble holding onto the cold air at the surface long enough.  Winter weather forecasting has come a long way and is much more grounded in science than it was back in the 70s and 80s.  Forecasts have improved significantly; however, despite all the improvements, as Capital Weather Gang readers know, busts still happen. 

 

 

 
