2013 Mid-Atlantic Severe General Discussion


Kmlwx


I love how people still don't realize TS David '79 went through the area.... or that before one starts to analyze the 500 mb pattern during a hurricane season outbreak, one might want to look at whether there was a tropical cyclone passing nearby triggering the entire event.

 

Of course we know how the Ivan outbreak worked out- just a reminder that some of the worst area-wide events have been a direct result of tropical cyclones.


I can't have Ian have all the fun.. I'm bustin' out the GIS: this was my Fall 2012 Quantitative and Spatial Analysis final project. I received a 93% on the project; the points taken off were mostly due to the professor not understanding some of the lingo / why it's important. If you have any questions, just ask...

There's a lot of statistics here, but the text should clear things up.. If you're a pictures-and-TLDR person, just read the Thesis and Thesis Conclusion.

 

Please ask permission before re-distributing and always give credit!

Quantitative & Spatial Analysis
Geog. 292 , Millersville University
wxmeddler
12/7/2012


Major Project
Purpose/Definitions:
The purpose of this project is to test, using statistical and geographic methods, several hypotheses about severe weather warning verification, specifically in the National Weather Service Sterling (LWX) County Warning Area (CWA). For the purposes of this project, a warning verifies if a storm report was submitted inside the warning polygon between the warning's issuance and expiration times.
The National Weather Service defines a Severe Thunderstorm warning as:
“when either a severe thunderstorm is indicated by the WSR-88D radar or a spotter reports a thunderstorm producing hail 3/4 inch or larger in diameter and/or winds equal or exceed 58 miles an hour.” (NWS Glossary)
Thesis:
To analyze spatial and quantitative patterns in order to answer questions about the relationship between severe thunderstorm warnings and severe thunderstorm reports in the Sterling CWA: whether radar improvements have increased the rate at which warnings verify with severe weather reports, and whether factors such as population centers and distance from the radar site impact verification rates.

Methods:
The warning and severe weather report datasets were obtained from the Iowa State Department of Agronomy. The downloaded datasets included every severe weather warning since 1980 and every severe weather report since 2003. To narrow things down, only warnings issued under the National Weather Service's polygon-based warning format were used (post ~03/2006), and the severe weather reports were trimmed to the same period. Both datasets were further reduced to severe thunderstorm warnings and reports only, excluding tornadoes and flash flooding (i.e., no reports or warnings concerning winter storms, hurricanes, flooding, or tornadoes).
To create a dataset that included warning verification, the storm reports and warning data had to be merged. To identify spatial patterns, centroids were used to symbolize the warnings on the map. In total, 5,127 storm reports and 2,063 warnings were merged. Merging was done via a mid-string query on the date/time field, converted into a continuous format (days since 1/1/1900) using Microsoft Excel. The dataset was then put back into ArcGIS, where Python was used in conjunction with a spatial join based on location. This produced a field giving, for every polygon, how many severe weather reports verified within the time and space of that polygon. Another field was created via an export to Excel and then imported back into the database; this ran an IF statement returning TRUE if the number of verified reports was greater than zero and FALSE otherwise. For quantitative and spatial analysis, the True/False field was translated into a one or a zero to signify verified vs. non-verified.
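The merge step above can be sketched outside of Excel/ArcGIS. This is an illustrative Python reconstruction (the function names are mine, not from the project's actual workflow), showing the date-to-serial-days conversion and the TRUE/FALSE verification flag:

```python
from datetime import datetime

EPOCH = datetime(1900, 1, 1)  # serial dates counted as days since 1/1/1900

def serial_days(ts: str) -> float:
    """Convert a 'YYYY-MM-DD HH:MM' timestamp into fractional days since 1/1/1900."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    return (dt - EPOCH).total_seconds() / 86400.0

def verified_flag(report_times, warn_start, warn_end):
    """0/1 verification field: 1 if any report's serial time falls inside the
    warning's issuance-to-expiration window (spatial containment is assumed
    to have been handled already by the ArcGIS spatial join)."""
    n_verifying = sum(1 for t in report_times if warn_start <= t <= warn_end)
    return 1 if n_verifying > 0 else 0
```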

The dataset was then split into 3 separate time periods, one for each radar upgrade. From the beginning of the dataset (~03/2006) to July 15, 2008 was the era of the original, unmodified radar; this period is noted in the dataset(s) as "BeforeSR" ("Before Super-Res"). From July 15, 2008 to February 27, 2012 was the era of the Super-Resolution (SR) Doppler radar, which increased the spatial resolution of the radar to pick out finer details in storms; this was particularly helpful in strong wind events, where smaller wind vortices could now be detected. In the dataset this period is denoted "BeforeDP" or "After Super-Res/SR" or something to that effect. Post February 27, 2012 is the era of the Dual-Polarization radar, which improved the radar's ability to determine the size and shape of what is falling, useful for distinguishing hail from heavy rain. This period is denoted "After Dual-Pol" or "After DP" or something to that effect in the dataset.
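Bucketing each warning into one of the three radar eras is a simple date comparison. A minimal sketch using the upgrade dates given above (the era labels are illustrative, not the dataset's exact field values):

```python
from datetime import date

SUPER_RES_DATE = date(2008, 7, 15)   # Super-Resolution upgrade
DUAL_POL_DATE = date(2012, 2, 27)    # Dual-Polarization upgrade

def radar_era(d: date) -> str:
    """Assign a warning's issuance date to one of the three radar periods."""
    if d < SUPER_RES_DATE:
        return "BeforeSR"
    elif d < DUAL_POL_DATE:
        return "AfterSR_BeforeDP"
    return "AfterDP"
```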

All Storm Reports:
post-741-0-65825700-1362026509_thumb.jpg

All Warning Centroids:
post-741-0-96222500-1362026508_thumb.jpg

Spatial Analysis:
After dividing the list of all warning joins in ArcGIS by time period, the separate tables were exported and put into GeoStats. GeoStats was then used to calculate the mean centers, weighted mean centers, and standard distances. The different time periods were then combined into one Excel spreadsheet, and the latitude and longitude were converted back into feet from miles, since issues arose in GeoStats when calculating such large numbers with the coordinate system. The spatial coordinate system for all datasets was NAD_1983_HARN_StatePlane_Maryland. The resultant table is listed in the tables section; the values used in the map are highlighted:
Weighted Centers:
The map below shows each time period's weighted verified mean location (colored dots) as well as its standard distance (corresponding colored circle). Also shown on the map is the unweighted mean across all polygons (blue star). The weighted verified mean location was computed using the true/false field as the weight, so if the mean sat in a certain direction away from the unweighted mean, warnings verified at a higher rate in that direction during that time period. The results show that the dots are fairly tightly clustered around the unweighted mean, meaning verification rates are fairly uniform in all directions. Interestingly, the weighted mean closest to the unweighted mean belongs to the After Super-Res time period, which also had the most data points. This suggests that with more data, the other two time periods would gravitate toward the unweighted mean as well, reinforcing the idea that verification is distributed equally in all directions. The standard distances are also remarkably similar in size, showing that the spatial distributions of the verified polygons are all roughly equal.
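For reference, the two GeoStats quantities plotted on this map can be computed directly. A small plain-Python sketch (the helper names are mine; no ArcGIS/GeoStats dependency):

```python
import math

def weighted_mean_center(points, weights):
    """Weighted mean center of (x, y) points; with the 0/1 verified field as
    weights this gives a 'weighted verified mean location'."""
    total = sum(weights)
    mx = sum(x * w for (x, _), w in zip(points, weights)) / total
    my = sum(y * w for (_, y), w in zip(points, weights)) / total
    return mx, my

def standard_distance(points, center):
    """Standard distance: root-mean-square distance of the points from a center."""
    cx, cy = center
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / len(points))
```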

 

post-741-0-20727000-1362026510_thumb.jpg


Quadrat Analysis:
In the quadrat analysis I explored the spatial variance in the data, using a Variance-Mean Ratio (VMR) test on the true/false verification data to see whether the verification areas are random, clustered, or dispersed. In the map below I have placed every warning's centroid and colored it based on whether that individual warning verified. The shaded blocks represent the verification rate in that particular cell. The cells were made by running a fishnet tool to create a grid, spatially joining the warnings to the grid, and taking the average for each grid cell. In total there are 2,036 centroids and 816 grid squares.

 

post-741-0-88889800-1362026510_thumb.jpg

Quadrat Results:
All warnings, including multiple reports per warning:
VMR = 0.560276, Variance = 1.415795, Mean = 2.526961

Verified warnings (T/F):
VMR = 0.209629, Variance = 0.529724, Mean = 2.526961

The statistic that gives meaning to the map above is the VMR. A VMR near one indicates a random (Poisson-like) pattern; values below one indicate a dispersed (uniform) pattern, and values above one indicate clustering. Looking across all warnings, including multiple reports per warning, the VMR of ~0.56 sits between dispersed and random; in other words, the pattern is close to random. This makes sense because the Sterling CWA is composed of roughly half urban and half rural land. If we strip the quantitative data out of the warnings and make the field purely binary (i.e., True/False), we see that warnings verify in a pattern between a perfectly dispersed one (no variance) and a random one. This tells us two things. First, distance from the radar is not an issue for warning verification; otherwise the VMR would be higher, since high verification rates would be clustered near the radar. Second, enough people report across the CWA to verify warnings in rural areas just as often as in the cities, although, citing the first quadrat test, the cities usually produce more verifying reports per warning.
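The VMR itself is just the variance of the per-cell counts divided by their mean. A minimal sketch (I use the sample variance here; whether GeoStats divides by n or n-1 is an assumption on my part):

```python
def vmr(cell_counts):
    """Variance-to-mean ratio over quadrat cells. Roughly: ~1 random (Poisson),
    <1 dispersed/uniform, >1 clustered."""
    n = len(cell_counts)
    mean = sum(cell_counts) / n
    var = sum((c - mean) ** 2 for c in cell_counts) / (n - 1)  # sample variance
    return var / mean
```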

Quantitative Analysis:
Tests:

Analysis of Variance (ANOVA):

PARAMETERS:
Variables: Mean: Std. Dev.: n:
BeforeSR 0.4096 0.4918 354
BefSRAfDP 0.7723 0.4193 1019
After_DP 0.794 0.4044 301

RESULTS:
SS(between) = 37.8434
df(between) = 2
SS(within) = 313.435
df(within) = 1671
F = 100.876
p-value: 0
The probability reported is from the
calculated F-value toward the tail.
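The ANOVA table above can be reproduced from just the per-group means, standard deviations, and counts. A sketch (the function is mine, not GeoStats output; small rounding differences from the printout are expected because the summary stats are themselves rounded):

```python
def anova_from_summary(groups):
    """One-way ANOVA from (mean, std dev, n) summaries for each group.
    Returns SS(between), SS(within), and the F statistic."""
    total_n = sum(n for _, _, n in groups)
    k = len(groups)
    grand_mean = sum(m * n for m, _, n in groups) / total_n
    ss_between = sum(n * (m - grand_mean) ** 2 for m, _, n in groups)
    ss_within = sum((n - 1) * s ** 2 for _, s, n in groups)
    f = (ss_between / (k - 1)) / (ss_within / (total_n - k))
    return ss_between, ss_within, f

# The three radar-era groups from the parameters table above
periods = [(0.4096, 0.4918, 354), (0.7723, 0.4193, 1019), (0.794, 0.4044, 301)]
```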


Parametric Independent Samples Difference of Means (Student's T)

PARAMETERS:
Variable: BeforeSR BefSRAfDP
Mean: 0.4096 0.7723
Std. Dev.: 0.4918 0.4193
n: 354 1019
Used Pooled Variance Estimate

RESULTS:
df = 1371
t = -13.3782
p-values : 0 for a two-tailed test
0 for a lower-tailed test
The probability reported is from the calculated Z-value
(and its negative, for a two-tailed test) toward the tail(s).

Parametric Independent Samples Difference of Means (Student's T)

PARAMETERS:
Variable: BefSRAfDP After_DP
Mean: 0.7723 0.794
Std. Dev.: 0.4193 0.4044
n: 1019 301
Used Pooled Variance Estimate

RESULTS:
df = 1318
t = -0.7944
p-values : 0.4271 for a two-tailed test
0.2136 for a lower-tailed test
The probability reported is from the calculated Z-value
(and its negative, for a two-tailed test) toward the tail(s).


Parametric Independent Samples Difference of Means (Student's T)

 

PARAMETERS:
Variable: BeforeSR After_DP
Mean: 0.4096 0.794
Std. Dev.: 0.4918 0.4044
n: 354 301
Used Pooled Variance Estimate

RESULTS:
df = 653
t = -10.7899
p-values : 0 for a two-tailed test
0 for a lower-tailed test
The probability reported is from the calculated Z-value
(and its negative, for a two-tailed test) toward the tail(s).

Analysis:
In studying the subjective (i.e., True/False) warning verification statistics, I used two kinds of tests to evaluate the alternative hypothesis for each time period. The ANOVA test was conducted using the subjective record for every single warning, grouped by time period. The test showed a large difference in variability across the group means, and the resulting p-value was effectively zero. Thus we reject the null hypothesis (that radar upgrades did not have an impact on warning verification) and accept the alternative: the three radar upgrades did have an impact on warning verification.
I then ran three parametric independent-samples Student's t-tests comparing the time intervals between radar upgrades. For the original radar versus the Super-Res upgrade, t = -13.38 with a p-value of effectively zero: we reject the null, accept the alternative, and say the radar upgrade helped warning verification. For Super-Res versus Dual-Pol, t = -0.79 with a one-tailed p-value of 0.21; at a significance level of 0.1 we fail to reject the null and say the Dual-Pol upgrade did not have a measurable impact on warning verification. Finally, comparing pre-Super-Res to post-Dual-Pol, t = -10.79 with a p-value of effectively zero, so we again reject the null and confirm the alternative hypothesis that the radar upgrades did have an impact on severe weather warning verification.
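The pooled-variance t statistics above can likewise be recovered from the summary statistics alone. A sketch (the helper is mine, not the GeoStats implementation):

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t with a pooled variance estimate, from summaries.
    Returns (t, degrees of freedom)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, df

# Original radar era vs. Super-Res era, using the parameters reported above
t_sr, df_sr = pooled_t(0.4096, 0.4918, 354, 0.7723, 0.4193, 1019)
```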

Thesis Conclusion:
The purpose of this project was to identify spatial and quantitative patterns and to show that radar upgrades over the past 8 years at the Sterling, VA Weather Service office have improved the verification of severe weather warnings.
A study of the verified warnings' weighted mean centers showed that each of the three time periods was close to the unweighted mean center, and their standard distances were also nearly the same. This leads to the conclusion that warning verification rates are fairly evenly spread out and do not lean toward one direction or another, indicating that distance to the radar site and other factors such as population centers do not play a role in whether a warning gets verified.
Using a quadrat analysis of the warning-centroid data, analyzed via a variance-mean ratio, we see that a quantitative approach across all warnings yields a near-random pattern. With this test, the VMR suggests that some areas report more severe occurrences per warning than others, which makes sense given the mix of rural and urban areas in the Sterling CWA at a 5 mi grid size; more research will be needed to prove this correlation. The subjective quadrat analysis showed that if the only thing that mattered was whether a warning verified, the distribution would sit somewhere between random and perfectly dispersed. This shows that warnings verify nearly equally across all areas of the CWA, eliminating the possibility of distance from the radar being a factor and backing up the weighted-centers analysis.
The quantitative tests show that over the period of 8 years, significant improvements to warning verification rates have occurred, with a very strong correspondence to advancements in radar technology. The ANOVA test showed that the three time periods' verification rates differ significantly; the chance of seeing these differences if the periods were equivalent is essentially 0%. For both the comparison between the original radar and Super-Resolution and the comparison between the original radar and the most recent upgrade, Student's t-tests show a significant positive change in verification rates following the upgrade. The only alternative hypothesis rejected was from the Student's t-test on the change from Super-Res to Dual-Pol; the p-value for this test was .21, which is not significant at a significance level of 0.1. My hypothesis for this is that Dual-Pol radar mainly improves hail estimation and has little effect on strong-wind detection; since more wind events than hail events occur in this part of the country, it makes sense that those verification rates would be similar.
In summary, my research shows that radar upgrades over the past 8 years in the Sterling CWA have definitely improved severe weather warning verification rates, and that verification rates show random-to-dispersed spatial distributions, indicating that potential issues such as distance to the radar and population do not have an impact on rates.


Other Data Explored:

-Question: What is the Relationship between Distance from the Radar and Warning Duration?
Alternative hypothesis: The further the warning was from the radar, the longer the warning duration, the assumption being that distance from the radar added uncertainty to the warning.

  • Conclusion: Using a bivariate regression, it was found that no correlation exists between distance from the radar and warning duration. The resulting r^2 value for this regression was 0.00002, thus we definitively reject the alternative hypothesis and accept the null.

post-741-0-12872300-1362026508_thumb.jpg

 

-Question: Is there a relationship between Warning Area and Warning Duration?
Alternative hypothesis: The larger the warning area, the longer the warning duration. This assumes a larger warning area takes longer because of the time a storm needs to leave the area.
Conclusion: Using a bivariate regression, it was found that little correlation exists between the two; warning area explains only ~10% of the variation in warning duration. Thus we reject the alternative and accept the null hypothesis.
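For context, the r^2 reported by these bivariate regressions is the squared Pearson correlation between the two variables. A minimal sketch (my own helper, not the project's actual code):

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple (bivariate) linear regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy ** 2 / (sxx * syy)
```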

 

post-741-0-52411000-1362026511_thumb.jpg

- Question: How rare was the June 2012 derecho compared to all severe weather warnings, based on the number of reports received inside warnings on that day?
Alternative hypothesis: It was very rare.
Conclusion: Based on a z-value test, with the population being all reports received inside a warning polygon and the test statistic being 140 reports, I rejected the null hypothesis and confirmed the alternative hypothesis. The z-value was 27.73, giving a p-value of essentially zero.

Z-value
PARAMETERS:
Variable: Obs_Inside
Mean: 2.0388
Std. Dev.: 4.975
n: 2062
Value: 140

RESULTS:
z (or t): 27.7311
prob: 0.5
The probability reported is from Z = 0 to the Z-value
entered, as if from the table in your textbook.
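The z statistic in this printout is just the single observation (140 reports) measured in standard deviations above the population mean. A one-line sketch:

```python
def z_score(value, mean, sd):
    """Standard score: how many standard deviations an observation is from the mean."""
    return (value - mean) / sd
```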


Tropical sys are often good tor producers around here. Probably make up a pretty good percentage of overall. Harder to get strong tor in tropical sys comparatively.. Tho this area probably does better there than nearer landfall as dry air is usually being injected for extra instability.

Yup, not ideal at all for F4's, but it's not like we've gotten many of those across the area anyway...

We're pretty good at catching an F3 from a tropical system.



How are weathergods?

 

http://www.weathergodsinc.com/

 

My buddy, who is not a weenie, has gone 3 years running and says they are good...he thinks

Never heard of them myself (then again, I haven't heard of about half the tour groups out there). $2600 for one person for 10 days is a reasonable/good price compared to the competition.


I think they made it to New Mexico last year....anything good there you missed?

Depends on when.. think maybe one or two early May, then some nice storms like the 21st but mainly just over the border in TX. June 12 was probably the biggest day in NM last yr, tho I think any touchdowns were late. Last yr was pretty crappy in prime chase territory.


if any of you have 25 to burn this is a cool package:

http://www.extremeinstability.com/stormanalysis101.htm

 

i got it on tue and have already watched prob 4.5 hours of it. there are three intro sets and then 4 or 5 'case studies' where he covers the setup then shows chase footage and breaks down what you're seeing.  obviously tuned to out west but some great info in there. he's not a met but he's really knowledgeable. sometimes i think the non mets who get to that level explain best.



 

I was going to ask you about this after you posted the pic on twitter.. I'm not sure whether to just buy it or mooch it off of you during the ride out west in May. :lol:

