Probabilistic convective forecasts



I was talking about reports, since the exact same storm over a heavily populated region will likely generate more storm reports than one over the plains of Kansas.

How does one accurately verify radar-indicated tornadoes over open country at night? I've read about looking for power-line flashes, or hoping for lightning to backlight the funnel, but if there isn't much lightning, power lines are few and far between, and the funnel may be rain-wrapped, that leaves storm surveys on radar-indicated tornadoes. That seems like using a lot of personnel resources on something that may or may not have touched down.

If baseball-sized hail falls in the forest and doesn't hit anyone, did it really fall?


I don't dispute that probabilistic outlooks have utility, and part of the reason for this success is the arbitrary nature of how verification works.

The verification process is not arbitrary, though. It's spelled out and completely logical.

You are now saying the size of the contoured probability matters in verification -- you did not make that claim before.

I thought this was understood; I didn't mean to mislead.

Nevertheless, this still does not make the SPC evaluation of probability as robust and concrete as it would appear. Which is more likely: that forecasters correctly assess potential and these claims are verified by statistics, or that forecasters get lucky more frequently than they are willing to admit and errors in both directions get washed out by the law of large numbers? I'll choose door number two.

Over a large sample? Clearly that they've correctly assessed the potential; otherwise, the numbers are the biggest coincidence I've ever seen in my life. I'm not going to call a forecaster lucky, or whatever subjective label you want, because the objective numbers don't jibe with my preconceived notions.

I'll give you another example; this time the box is the state of Kansas in all ten events. In the first nine, no tornadoes were observed... and in the last case, 80 were observed. In 10% of the outlooks at least one tornado occurred -- but does this mean that the forecaster had a firm understanding in each event, or that on the whole the forecaster understands tornado potential? I would say no, despite the fact that your conditions were met.

Ok, that's your prerogative. We can spend all day coming up with unrealistic scenarios that lead to good verification scores, because every verification technique is imperfect. Nobody is claiming otherwise; this is why multiple techniques are used, as discussed in the Doswell and Brooks primer. Is it possible that in some situations a high verification score masks poor skill? Sure, but in the aggregate, these scores are a great tool for assessing forecaster skill.
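To make the "in the aggregate" point concrete, here is a minimal reliability sketch in Python. The forecast probabilities and outcomes below are simulated for illustration, not actual SPC data; the premise is simply that over a large, well-calibrated sample, events tagged 10% should occur about 10% of the time.

    import numpy as np

    # Simulated sample of probabilistic forecasts (illustrative, not SPC data).
    # Each forecast is the stated chance of a tornado within 25 mi of a point;
    # each outcome is 1 if one occurred there, 0 if not.
    rng = np.random.default_rng(42)
    n = 100_000
    forecasts = rng.choice([0.02, 0.05, 0.10, 0.15, 0.30], size=n)
    # Assume a well-calibrated forecaster: events occur at the stated rate.
    outcomes = rng.random(n) < forecasts

    # Reliability check: within each probability level, compare the observed
    # frequency of events to the forecast probability.
    for p in np.unique(forecasts):
        hit_rate = outcomes[forecasts == p].mean()
        print(f"forecast {p:.0%}: observed frequency {hit_rate:.1%}")

Individual days can still bust in either direction; the verification claim is only about behavior like this over thousands of forecasts.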

And as I stated before, one can broadly assess the skill of the forecaster, but the evidence is not as concrete or robust as it is made out to be.

Is this based on anything other than your opinion?

The funny thing about statistics is that they can be fabulously elegant and deemed to be appropriate... until they're not. This problem is not endemic to meteorology alone, and in actuality the field seems to be more insulated from Gaussian mirages than most... but it too is not immune.

Agreed, though I see no evidence of that in the cases we're discussing here (either in the Brooks climatology papers or the various works I've presented in this thread). Maybe you have.

Yes and somewhat. I again do not find the verification scheme to be as accurate as it is claimed to be. It's no different from economic models that try to indicate how risky some investment is. That has blown up in our faces and shown we do not understand the system as well as we think we do, despite the fact that so many people have become ridiculously rich.

Markets are notoriously bad at including extreme outlier-type events in their risk assessment and in their general dealing with Gaussian assumptions. I see no evidence of that in convective probability forecasts. Is there any evidence the two are similar? I'm certainly open to arguments to the contrary.

Gotcha, so you did oversimplify it. And guess what, I'd argue that the more simplistic approach was better if you want a more favorable outcome with regard to your verification schemes!

That's why I said "for example, let's say a large sample of 10% tornado point forecasts for simplicity."

Of course I wouldn't want a deterministic forecast, but I do use the probability outlooks as nothing more than the confidence that the given forecaster(s) has in a given event. I do not believe that the 10% contour actually represents 10%. But hey, nothing wrong with that.

I've read this a bunch of times and I'm not sure what you're saying. That you're willfully interpreting the forecast differently than it's intended?

What would I do differently? I think confidence intervals are just as informative and do not require the statistical investigation of probability contours.

Yeah, maybe for the higher-probability events, which suffer from low sample sizes in verification, though I'd worry how that would play with the public.

Additionally, I would admit that I could not reliably gauge the chance of a tornado within 25 miles of a given point. I'm not going to claim that I can accurately state the probability of an event when I cannot precisely say how that event materializes in the first place.

Well, the numbers show pretty clearly that they can. You don't have to know the exact physical mechanisms responsible for some phenomenon if an ingredients-based, probabilistic approach can be used skillfully, which it can. If you were charged with making a deterministic tornado forecast, then yes, admission that you just can't do it skillfully would probably be necessary. But again, the whole idea behind the probabilistic forecast is the tacit admission that the phenomenon in question or the processes involved with its evolution are not well understood and/or not well observed.
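As a footnote on the "objective numbers" side of this: the standard single-number measure for probability forecasts is the Brier score, usually reported as a skill score against climatology. A bare-bones sketch, with made-up forecasts and outcomes:

    import numpy as np

    def brier_score(probs, outcomes):
        # Mean squared error of probability forecasts against 0/1 outcomes;
        # 0 is a perfect score, and lower is better.
        probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
        return np.mean((probs - outcomes) ** 2)

    def brier_skill_score(probs, outcomes):
        # Skill relative to always forecasting the climatological base rate;
        # a positive value means the forecasts beat climatology.
        outcomes = np.asarray(outcomes, float)
        climo = np.full(len(outcomes), outcomes.mean())
        return 1.0 - brier_score(probs, outcomes) / brier_score(climo, outcomes)

    probs = [0.10, 0.10, 0.30, 0.02, 0.05]    # hypothetical forecasts
    outcomes = [0, 0, 1, 0, 0]                # hypothetical verification
    print(brier_score(probs, outcomes))       # ~0.103
    print(brier_skill_score(probs, outcomes)) # ~0.36, i.e., beats climatology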


How does one accurately verify radar-indicated tornadoes over open country at night? I've read about looking for power-line flashes, or hoping for lightning to backlight the funnel, but if there isn't much lightning, power lines are few and far between, and the funnel may be rain-wrapped, that leaves storm surveys on radar-indicated tornadoes. That seems like using a lot of personnel resources on something that may or may not have touched down.

If baseball-sized hail falls in the forest and doesn't hit anyone, did it really fall?

The best bet is damage surveys, but those are by their very nature subject to interpretation, particularly in situations with no visuals (like at night). We also know that even when damage surveys are able to determine that a tornado occurred, the EF rating may not be as accurate as hoped. Another possibility is that if CASA goes national, with networks of higher-frequency radars offering better spatial and temporal resolution closer to the surface, radar-indicated tornadoes will carry more weight.


For those who are curious, here's a figure illustrating the idea behind the Warn-on-Forecast initiative:

[Figure 1: Warn-on-Forecast concept illustration]

I love this type of product as an operational met, but this doesn't tell the public jack. They already don't know what a 20% chance of rain means, so let's attach that to a critical warning as well? It's a solid idea, but it would require WAY more public education than we have the ability to perform. It's not as bad as probabilistic QPF, though.


I love this type of product as an operational met, but this doesn't tell the public jack. They already don't know what a 20% chance of rain means, so let's attach that to a critical warning as well? It's a solid idea, but it would require WAY more public education than we have the ability to perform. It's not as bad as probabilistic QPF, though.

Well, they have 10-15 years to work on it, so let's not quit just yet.


  • 2 weeks later...

Well, the numbers show pretty clearly that they can. You don't have to know the exact physical mechanisms responsible for some phenomenon if an ingredients-based, probabilistic approach can be used skillfully, which it can. If you were charged with making a deterministic tornado forecast, then yes, admission that you just can't do it skillfully would probably be necessary. But again, the whole idea behind the probabilistic forecast is the tacit admission that the phenomenon in question or the processes involved with its evolution are not well understood and/or not well observed.

Very well... we certainly will agree to disagree -- I see this all differently, but in the end, if the user (generally meteorologists and not the public at large) can glean some use out of it, I suppose it doesn't matter whether the probabilities are "accurate." I will certainly keep track of the SPC contours this spring and summer, verify them myself, and see what I come up with -- I figured that would be the best way to gauge their ability to meet their marks (a preliminary look at some older outlooks and tornado reports seems to suggest they are a little off, but maybe I looked at a bad month). In any event, I have no doubt that they are knowledgeable about severe weather and are some of the best forecasters in the country in that specialty.
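For the self-verification exercise: since the outlook probabilities are defined as the chance of a tornado within 25 miles of a point, tallying hits mostly comes down to distance tests between outlook points and the storm reports. A rough first-pass sketch in Python -- the coordinates are hypothetical and the actual contour handling is left out:

    import math

    def within_25mi(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance, thresholded at 25 statute miles.
        r_miles = 3958.8  # mean Earth radius in miles
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * r_miles * math.asin(math.sqrt(a)) <= 25.0

    # One outlook day: did any report verify a point inside the 10% contour?
    reports = [(38.5, -97.6), (39.1, -96.8)]  # hypothetical tornado reports
    point = (38.7, -97.3)                     # a grid point inside the contour
    print(any(within_25mi(point[0], point[1], lat, lon) for lat, lon in reports))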

Markets are notoriously bad at including extreme outlier-type events in their risk assessment and in their general dealing with Gaussian assumptions. I see no evidence of that in convective probability forecasts. Is there any evidence the two are similar? I'm certainly open to arguments to the contrary.

Actually, it's quite the opposite. Over the course of the last 100 years, market crashes have happened precisely because we do not firmly grasp the real risk involved in what we do. It's really not much different from sports betting... you know that Duke will beat Belmont 99 times out of 100, which is why a 10-dollar bet nets you 50 cents in profit. It's a safe bet -- until Duke loses. Better to make ten 50-cent bets and hope one works out to net you 10 dollars in profit... I realize this example is not completely accurate with regard to real payouts, but it illustrates the point.
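To put rough numbers on that bet (a quick sketch in Python; the payouts are my illustrative figures above, not real odds):

    def expected_profit(p_win, stake, profit_if_win):
        # Expected profit of one bet: a win pays profit_if_win,
        # a loss costs the full stake.
        return p_win * profit_if_win - (1.0 - p_win) * stake

    # The "safe" bet: risk $10 to win $0.50, assuming Duke wins 99% of the time.
    print(expected_profit(0.99, 10.0, 0.50))   # +0.395 per bet
    # If the true win probability is really 95%, the edge quietly flips:
    print(expected_profit(0.95, 10.0, 0.50))   # -0.025 per bet

A small error in the assumed win probability is all it takes to turn the "safe" bet into a losing one.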

Our financial markets are not driven by logic, but by luck. Think of all of the innovation over the course of this country's history, especially in the last 100 years. No one would have forecast the invention of the car, nuclear energy, or the internet. A few big successes reap enough benefit to cover for the massive number of failed endeavors, and then some. That's actually what makes the future scary for America: we're losing our innovative edge, and thus the ability to cover our collective asses.


Actually, it's quite the opposite. Over the course of the last 100 years, market crashes have happened precisely because we do not firmly grasp the real risk involved in what we do. It's really not much different from sports betting... you know that Duke will beat Belmont 99 times out of 100, which is why a 10-dollar bet nets you 50 cents in profit. It's a safe bet -- until Duke loses. Better to make ten 50-cent bets and hope one works out to net you 10 dollars in profit... I realize this example is not completely accurate with regard to real payouts, but it illustrates the point.

Our financial markets are not driven by logic, but by luck. Think of all of the innovation over the course of this country's history, especially in the last 100 years. No one would have forecast the invention of the car, nuclear energy, or the internet. A few big successes reap enough benefit to cover for the massive number of failed endeavors, and then some. That's actually what makes the future scary for America: we're losing our innovative edge, and thus the ability to cover our collective asses.

Maybe I wasn't very clear or maybe you misread what I wrote, but I agree with you completely here.


