Why forecasts of snowstorms sometimes bust


usedtobe


I wrote this article for the Capital Weather Gang in 2011. I've attached the link, but for those who don't get the Post, I've reproduced it below.

https://www.washingtonpost.com/blogs/capital-weather-gang/post/why-are-snowstorm-forecasts-sometimes-so-wrong-part-one/2011/11/23/gIQA4ZfaoN_blog.html?utm_term=.a23049645d7c

 

Almost every year, at least one snow forecast ends up busting in our region. Many readers probably remember last year’s December 26 bust (when we called for 3-6” of snow, and little fell). The fallout elicited remarks like “weather forecasting is the only job where you can be wrong 90 percent of the time and still keep your job.” While that’s a huge overstatement about the state of weather forecasting, it certainly captures the frustration that many feel when a forecast fails.

A number of factors can contribute to a poor forecast: 1) many of the physical processes that govern the atmosphere act non-linearly, 2) the initial state of the atmosphere is uncertain, 3) certain parts of a model’s physics have to be approximated, 4) there is often more than one stream of flow that has to be handled correctly by the models, 5) we live close to a huge heat and energy source (the ocean), and 6) forecasters sometimes make poor decisions.

Any one of these factors can help lead to a poor forecast and a perceived bust. In the following discussion, I’ll attempt to explain how the first three factors can sometimes negatively impact a forecast and how meteorologists try to mitigate them. Next week, I’ll tackle the last three factors in part two of this series.

The non-linear nature of weather

The non-linear nature of the atmosphere comes into play in causing forecasting problems in several ways.

Forecasters cannot simply extrapolate features as they come eastward, expecting them to move and change in a linear manner; weather systems cannot be counted on to evolve steadily.

Imagine a series of numbers as representing the development of a storm system. A linear extrapolation of the development of such a system would be 2, 4, 6, 8: a steady rate of increase in the system’s strength.

However, a non-linear change is represented by a number sequence like 2, 5, 15, 60. Weather systems can and do change and develop rapidly. These non-linear changes impact not only the strength of the system but also how it tracks. That’s why computer models are of such value: they can often anticipate the rapid changes.
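To make the contrast concrete, here is a toy numerical sketch; the numbers mirror the sequences above and are purely illustrative, not output from any real model:

```python
# Toy storm "strength" over four time steps.
hours = range(4)
linear = [2 * (t + 1) for t in hours]     # 2, 4, 6, 8: steady growth
nonlinear = [2, 5, 15, 60]                # rapid, accelerating growth

# A forecaster extrapolating linearly from the first two non-linear points
# badly underestimates what actually happens.
slope = nonlinear[1] - nonlinear[0]                        # 3 per step
extrapolated = [nonlinear[0] + slope * t for t in hours]   # 2, 5, 8, 11
print(extrapolated, "vs actual", nonlinear)
```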

Without physically-based computer models, it is doubtful that forecasters would have been able to predict the massive October storm that hit the northeast. The non-linear nature of weather is what makes it possible to get monster snowstorms but also is partly the reason why forecasting them is so difficult. Because atmospheric responses are non-linear, errors in a model can sometimes grow quickly.

Uncertain initial conditions

MIT scientist Edward Lorenz published two seminal papers in the 1960s showing that small differences between two simulations of the initial state of the atmosphere can grow non-linearly when projected forward and produce two diametrically opposed solutions. Steve Tracton has previously written about Lorenz’s work and its implications for forecasting.
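Lorenz's point is easy to demonstrate with his famous 1963 equations. In the sketch below (a crude Euler integration, for illustration only), two runs that start one part in a million apart end up bearing no resemblance to each other:

```python
# Lorenz (1963) system with his classic parameters, stepped forward crudely.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)         # one estimate of the "initial atmosphere"
b = (1.000001, 1.0, 1.0)    # a nearly identical estimate

for _ in range(3000):       # integrate both forward in lockstep
    a = lorenz_step(*a)
    b = lorenz_step(*b)

print(a[0], b[0])           # the two solutions have completely diverged
```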

Unfortunately, there is no way to measure atmospheric variables (temperature, winds, moisture, etc.) accurately at every point on the globe. Furthermore, atmospheric measurements from various sources (balloons, satellites, radar, ships, planes) are imperfect. So models never start from a 100% accurate representation of the actual atmosphere.

The incomplete set of imperfect observations has to be brought into a model in such a way as to minimize errors that might later grow and contaminate a simulation. This quality-control and assimilation process somewhat smooths the data. Therefore, the initial state of the atmosphere is always somewhat uncertain, and that uncertainty can and does sometimes lead to major forecast problems.
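The blending step at the heart of assimilation can be sketched as a weighted average of the model's prior guess and an observation, with weights set by how much each is trusted. A toy scalar example (hypothetical numbers; real systems solve this in millions of dimensions):

```python
# Model first guess ("background") and an observation of the same quantity,
# each with an assumed error variance. All values are hypothetical.
background, background_err_var = 272.0, 4.0      # temperature (K)
observation, observation_err_var = 270.0, 1.0    # temperature (K)

# Weight the observation more heavily when its error variance is smaller.
gain = background_err_var / (background_err_var + observation_err_var)
analysis = background + gain * (observation - background)
print(analysis)   # 270.4 K: pulled strongly toward the more trusted observation
```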

The two forecasts below are from the exact same model with identical physics but with slightly different (probably not discernible to the naked eye) initial fields (sets of data). Note that one has a strong low (left-hand panel) located north of D.C., implying a rain storm, while the other has a much weaker low farther to the south, suggesting the storm would either miss us to the south or would produce snow.

 

 

[Figure: two surface forecasts from the same model with slightly different initial conditions]

Any errors in the initial fields grow faster in some patterns than in others. That is the basis for developing ensemble forecasting systems. The National Centers for Environmental Prediction (NCEP) runs a number of simulations four times each day in which the initial conditions are perturbed (tweaked slightly) to try to get an idea of the probabilities associated with any storm system. The resultant array of solutions provides information that can be used to assess the probability of getting a snowstorm. However, even if every ensemble member is forecasting a snowstorm at a day-5 projection, that is no guarantee that a snowstorm will occur. Occasionally, the truth lies outside the spread of all the solutions.
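In miniature, the procedure looks like the sketch below, which uses the chaotic logistic map as a stand-in for a weather model; every number here is hypothetical:

```python
import random

random.seed(42)

def toy_forecast(initial, steps=20):
    """Stand-in for a model run: a simple but chaotic non-linear rule."""
    x = initial
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)   # logistic map in its chaotic regime
    return x

control = 0.3
# Perturb the initial condition slightly, once per ensemble member.
members = [toy_forecast(control + random.uniform(-0.001, 0.001))
           for _ in range(50)]

# Probability = fraction of members exceeding a "snowstorm" threshold.
prob = sum(m > 0.5 for m in members) / len(members)
print(f"{prob:.0%} of members exceed the threshold")
```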

Approximations of some physical processes

Another source of error is that certain atmospheric processes (convection, clouds, radiation, boundary layer processes, etc.) are too small to be represented in the model, not well understood, or too computationally expensive to simulate.

Probably the most problematic process to deal with is convection. It occurs on a scale too small for models to simulate explicitly and must be parameterized, a procedure for representing its effects at the scale the model resolves. Parameterization requires approximations, which can lead to forecast problems.
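As a cartoon of what a parameterization does, the sketch below applies a crude convective-adjustment rule: wherever the temperature drop with height exceeds a critical lapse rate, the layer is mixed back toward that rate, standing in for unresolved convection. It is illustrative only; operational schemes are far more elaborate:

```python
def convective_adjust(temps_k, dz_m=1000.0, critical_lapse=0.0098):
    """Nudge unstable layers back toward a critical lapse rate (K/m).

    temps_k: temperatures (K) at equally spaced levels, bottom first.
    The values and the single upward sweep are illustrative only.
    """
    adjusted = list(temps_k)
    for i in range(len(adjusted) - 1):
        lapse = (adjusted[i] - adjusted[i + 1]) / dz_m    # K per meter
        if lapse > critical_lapse:                        # layer is unstable
            mean = (adjusted[i] + adjusted[i + 1]) / 2.0
            half = critical_lapse * dz_m / 2.0
            adjusted[i], adjusted[i + 1] = mean + half, mean - half
    return adjusted

print(convective_adjust([300.0, 285.0, 276.0]))   # stabilizes the column
```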

The uncertainty of the initial conditions, possible errors introduced by the approximations of the physics, and the non-linearity of the physical processes are a dynamic mix. Together, they are what lead the models to jump from solution to solution leading up to a storm. At longer ranges, the differences between solutions can be quite large. At shorter ranges, the differences are smaller, but our location near the ocean makes small differences in the track and intensity of a storm crucial to getting a snow forecast right.

The differences between two operational models, the GFS and NAM, prior to the October 29 storm are a case in point. The NAM suggested that the D.C. area would see accumulating snow, while at least one run of the GFS suggested almost all the precipitation in the area would be rain. Because there is always some uncertainty about any forecast, meteorologists are evolving toward issuing probability-based forecasts.

Three other factors also make snow forecasts difficult:

* There is often more than one stream of flow that has to be handled correctly by the models
* The ocean to our east and mountains to the west
* Forecasters may make poor decisions

There is often more than one stream of flow that has to be handled correctly by the models

The ingredients needed to get a snowstorm are often governed by more than one stream of flow and can be affected by what is happening both upstream and downstream of the approaching storm. Let’s go back to our surface forecasts, one of which showed a low over the Great Lakes while the other had the low suppressed to our south.

In the figures above, note the differences in the surface pressure patterns to the northeast of the storm (over western Pa. in the left panel, northern Ala. and Ga. in the right panel).

In the right panel, the pressure gradient (change in pressure over some distance) implies that surface winds are still from the northwest (see arrow) over New England as there is a strong low located near the Canadian Maritimes. That cyclonic circulation helps to force the storm approaching the East Coast to take a southerly track.

In the left panel, the low is weaker and farther to the east, allowing the winds across New England to be southeasterly (see arrow). Without the north winds over New England and the strong low near Nova Scotia, the low approaching the East Coast has more room to come northward and develop.
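For a rough sense of why a pressure gradient implies wind, the geostrophic relation ties wind speed to the pressure change over distance. A small sketch (hypothetical numbers; real surface winds also feel friction, so this is only approximate):

```python
# Geostrophic wind speed from a pressure gradient: V = dp / (rho * f * dx).
# All numbers are hypothetical, chosen only to make the arithmetic easy.
RHO = 1.25         # near-surface air density, kg/m^3
F = 1.0e-4         # mid-latitude Coriolis parameter, 1/s

dp = 800.0         # 8 hPa pressure difference (in Pa)...
dx = 400_000.0     # ...across 400 km

wind = dp / (RHO * F * dx)    # m/s
print(f"{wind:.0f} m/s")      # ~16 m/s: a tighter gradient means stronger wind
```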

 

[Figure: upper-air forecasts for the same storm, ECMWF (top two panels) vs. GFS (bottom two panels)]

In the bottom panel, the ridge is located much farther west (near the West Coast) than in the top panel (over the Rockies). The bottom solution also has two distinct upper-level impulses that have not yet phased (merged together), and it therefore has a weaker low located farther off the coast than the solution shown in the top panel. By contrast, the top panel, with its more eastward ridge, has essentially phased the two upper-level disturbances, producing a sharper upper-level trough and a strong low tucked in closer to the coast.

In the above maps, the ECMWF forecast (the top two panels) was forecasting a major snowstorm while the GFS (bottom two panels) was predicting a near miss. The more upper level features that are in play as a potential storm approaches, the tougher it is for the models to get the forecast right.

The warm waters off the coast

Most of our potential snowstorms track across the Southeast and then turn up the coast. They therefore usually tap into some of the air coming northward from near the Gulf Stream, setting up a very tight thermal gradient (how quickly the temperature changes as you move across the front).

Having a strong frontal boundary along the coast is a double-edged sword. If you get a favorable track, lots of energy is available to crank up a storm. However, it also means that there is plenty of warm air nearby that can mess up a forecast with a slight deviation in the storm track.

In the image shown above right, if you shift the center of the low a little to the west, it would introduce freezing rain or rain where heavy snow actually fell. Shift the storm track a little east and there would have been no mixing problems east of I-95, where snow changed to sleet for a time.

Most major storms are associated with a very tight temperature gradient so D.C. is usually right near the rain-snow line. Any small last minute shift in the storm track can make a forecaster look really foolish.

Mountain challenges

Whereas the ocean supplies warm air that can mess up a forecast, the mountains help promote cold air damming, which tends to keep low-level cold air across the area longer than it would otherwise last. It’s often tricky to determine how long the cold air will stick around before being eroded by warmer air trickling in from the ocean (see above).

Cold air damming requires cold high pressure to our north. The presence of the high results in cold flow from the north at low levels. The mountains then essentially act as a barrier, keeping cold air trapped to their east. Because the cold air is dense and difficult to dislodge, it can sometimes linger longer than forecast by the models, leading to unexpected icing problems.

Just as the mountains can be conducive to wintry weather, they can also impede it. When the flow is from the west and northwest, the mountains produce downsloping winds. These winds dry the air when a storm tracks just to our north, cutting off moisture even if temperatures are cold enough to support snow.

Meteorologists may have bias or misinterpret the data

The most common reason meteorologists err in their forecasts is misjudging the probabilities associated with a storm, especially in tricky situations where there is no consensus among the models.

Failure to communicate probabilities

Sometimes the errors are a result of hubris in trying to make a single deterministic forecast in an iffy situation. The general public wants a best guess, so we try to provide that. However, if we fail to do a good job describing the uncertainty of the forecast, we can really get burned. That certainly was the case during the infamous December 26 non-storm last year.

Despite cloaking the Dec. 26 forecasts in probabilistic terms, I made the mistake of saying I was “bullish” about accumulating snow, giving a false impression of the certainty of the forecast. The CWG forecasters were then slow to pull away from the snowy solution even though radar and satellite were indicating that the heavier clouds and weather to our southwest were moving more eastward than northward. For an in-depth discussion of what went wrong with that forecast, click here. In that case, CWG forecasters were afraid to write off the storm because of the rapid changes that sometimes occur to the precipitation shield during snowstorms (the 1987 Veterans Day storm comes to mind).

Overreliance (and underreliance) on models

Sometimes meteorologists bust in the shortest time ranges by relying too much on models. However, those same models are powerful tools that also correctly predicted this year’s October snowstorm along the East Coast and suggested that the precipitation would linger longer than indicated by extrapolation of the radar images. It’s very doubtful that anyone would have forecast such a storm without the computer simulations.

Human psychology and emotions can also lead to forecast mistakes. For example, if a forecaster predicts that a storm will miss the area and the area is then gridlocked by that same storm, he or she is often lambasted by the media and general public. The next time a similar-looking storm appears on the models, recollections of the previous bust might creep into the forecast.

While it’s important to learn from forecast mistakes, it’s also possible to be blinded by them. The CWG forecasters guard against that by sharing forecast ideas with other members of the team prior to issuing a forecast.

Forecaster bias (wishcasting and hero syndrome)

Most meteorologists quickly learn to rein in any bias that might result from their love or hatred of snow. However, a few suffer from hero syndrome (no Capital Weather Gang forecaster, of course), wanting to be the first to call for a major snowstorm based on one or two model runs when the storm is still several days away. Trumpeting such a forecast is usually a mistake that implies more skill than actually exists at longer time ranges. The same forecasters may then be slow to back away from their deterministic forecasts.

Anytime you hear someone calling for a major snowstorm 4 or 5 days in the future, view it with lots of skepticism. The only sane way to forecast any storm is to assess the probabilities and then convey them to the public. That is why the Hydrometeorological Prediction Center routinely issues probability forecasts for various snowfall amounts and why the CWG team also tries to provide probabilistic forecasts of a storm’s potential.
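In practice, such probabilistic guidance boils down to counting ensemble members. A minimal sketch with made-up snowfall amounts (not real guidance from any center):

```python
# Hypothetical snowfall forecasts (inches) from ten ensemble members.
member_snowfall = [0.5, 1.0, 2.5, 3.0, 4.0, 4.5, 5.0, 6.5, 8.0, 12.0]

# Report exceedance probabilities instead of a single deterministic number.
for threshold in (1, 2, 4, 8):
    prob = sum(amt >= threshold for amt in member_snowfall) / len(member_snowfall)
    print(f'P(snow >= {threshold}") = {prob:.0%}')
```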

Next time you get ready to wail about a bad forecast, think about all the different ways that a forecast can go wrong...

 
