wxmeddler Posted February 28, 2013
I may have to break out the big guns tonight in my GIS arsenal. I forgot to post the results of my Fall project.
TUweathermanDD Posted February 28, 2013
Okay Ian, please give up the goods on how you got the boxes in there!
Ian Posted February 28, 2013
I may have to break out the big guns tonight in my GIS arsenal. I forgot to post the results of my Fall project.
you get them off IEM: http://mesonet.agron.iastate.edu/request/gis/watchwarn.phtml in the prog i linked go to settings-->project properties-->enable on the fly
gymengineer Posted February 28, 2013
I love how people still don't realize TS David '79 went through the area.... or that before one starts to analyze the 500 mb pattern during a hurricane season outbreak, one might want to look at whether there was a tropical cyclone passing nearby triggering the entire event. Of course we know how the Ivan outbreak worked out- just a reminder that some of the worst area-wide events have been a direct result of tropical cyclones.
wxmeddler Posted February 28, 2013
I can't have Ian have all the fun.. I'm bustin' out the GIS. This was my Fall 2012 Quantitative and Spatial Analysis final project. I received a 93% on the project, and the points taken off were mostly due to the professor not understanding some of the lingo / why it's important. If you have any questions, just ask. There are a lot of statistics here, but the text should clear things up. If you're a pictures-and-TLDR person, just read the Thesis and Thesis Conclusion. Please ask permission before re-distributing and always give credit!

Quantitative & Spatial Analysis
Geog. 292, Millersville University
wxmeddler
12/7/2012
Major Project

Purpose/Definitions:
The purpose of this project is to validate hypothesized theories, using statistical and geographical means, relating to severe weather warning verification, specifically in the National Weather Service Sterling (LWX) County Warning Area (CWA). For the purposes of this project, a warning verifies if a storm report was submitted inside the warning polygon and between the warning issuance and expiration times.

The National Weather Service defines a Severe Thunderstorm Warning as issued "when either a severe thunderstorm is indicated by the WSR-88D radar or a spotter reports a thunderstorm producing hail 3/4 inch or larger in diameter and/or winds equal or exceed 58 miles an hour." (NWS Glossary)

Thesis:
To analyze spatial and quantitative patterns in order to answer questions about the relationship between severe thunderstorm warnings and severe thunderstorm reports in the Sterling CWA: specifically, whether radar improvements have increased the success rate between warnings and severe weather reports, and whether factors such as population centers and distance from the radar site impact verification rates.

Methods:
The warning and severe weather report datasets were obtained from the Iowa State Department of Agronomy.
The datasets downloaded included every severe weather warning since 1980 and every severe weather report since 2003. To narrow down the dataset, only warnings issued under the National Weather Service's polygon-based warning system were used (post ~03/2006), and the severe weather reports were trimmed to the same criterion. Both datasets were further reduced to severe thunderstorm warnings only, excluding tornadoes and flash flooding (i.e., no reports or warnings concerning winter storms, hurricanes, flooding, or tornadoes).

To create a dataset that included warning verification, the storm reports and warning data had to be merged. For the purposes of identifying spatial patterns, centroids were used to symbolize the warnings on the map. In total, 5,127 storm reports and 2,063 warnings were merged. Merging was done via a mid-string query on the date/time field, converted into a continuous format given as days since 1/1/1900, which was completed using Microsoft Excel. The dataset was then put back into ArcGIS, where Python was used in conjunction with a spatial join based on location. This produced a field giving, for every polygon, how many severe weather reports verified within the time and space of that polygon. Another field was created via an export to Excel and then imported back into the database; this ran an IF statement saying that if the number of verified reports is greater than zero, then TRUE; if not, then FALSE. For quantitative and spatial analysis purposes, the TRUE/FALSE field was translated into a one or a zero to signify verified vs. non-verified.

The dataset was then split into three separate time periods, one for each radar upgrade period. From the beginning of the dataset (~03/2006) to July 15, 2008 was the time of the original, non-modified radar. This period is noted in the dataset(s) as "beforeSR" or "Before Super-Res".
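The merge-and-flag step described above (date/time converted to continuous days since 1/1/1900, a spatial join, then an IF statement on the verified-report count) can be sketched in plain Python. This is a simplified illustration of the logic only, not the actual ArcGIS/Excel workflow: the point-in-polygon test is reduced to a bounding-box check, and all field names here are hypothetical.

```python
from datetime import datetime

EPOCH = datetime(1900, 1, 1)

def to_serial_days(ts: str) -> float:
    """Convert a 'YYYY-MM-DD HH:MM' string to continuous days since
    1/1/1900, mirroring the Excel serial-date conversion described above."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    return (dt - EPOCH).total_seconds() / 86400.0

def report_verifies(warning, report) -> bool:
    """A report verifies a warning if it falls inside the polygon
    (simplified here to the polygon's bounding box) AND inside the
    issuance-to-expiration window."""
    xmin, ymin, xmax, ymax = warning["bbox"]
    in_space = xmin <= report["x"] <= xmax and ymin <= report["y"] <= ymax
    in_time = warning["issued"] <= report["time"] <= warning["expires"]
    return in_space and in_time

def flag_warnings(warnings, reports):
    """Count verifying reports per warning and derive the 0/1 verified
    flag, matching the IF(count > 0) step done in Excel."""
    for w in warnings:
        w["n_verified"] = sum(report_verifies(w, r) for r in reports)
        w["verified"] = 1 if w["n_verified"] > 0 else 0
    return warnings
```

A warning's verified flag is 1 exactly when at least one report falls inside both the polygon (here, its bounding box) and the warning's time window.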
From July 15, 2008 to February 27, 2012 was the time of the Super-Resolution (SR) Doppler radar; this increased the spatial resolution of the radar to pick out finer details in storms. This was particularly helpful in strong wind events where smaller wind vortices were now being detected. In the dataset this period is denoted as "BeforeDP", "After Super-Res/SR", or something to that extent. Post February 27, 2012 is the period of the dual-polarization radar. This increased the capability of the radar to determine the size and shape of what is falling, which is useful in distinguishing hail from heavy rain. This period is denoted as "After Dual-Pol" or "After DP" or something to that extent in the dataset.

All Storm Reports:
All Warning Centroids:

Spatial Analysis:
After dividing the list of all the warning joins in ArcGIS by time, the separate time period tables were exported and put into GeoStats. GeoStats was then used to calculate the means, weighted mean variables, and standard deviations. The different time periods were then combined into one Excel spreadsheet, and the latitude and longitude were converted back into feet from miles, since issues arose in GeoStats when calculating with such large numbers in the coordinate system. The spatial coordinate system for all datasets was NAD_1983_HARN_StatePlane_Maryland. The resultant table is listed in the tables section; the values used in the map are highlighted.

Weighted Centers:
The map below shows each time period's weighted verified mean location (colored dots) as well as their standard distances (corresponding colored circles). Also shown on the map is the unweighted mean across all polygons (blue star). The weighted verified mean location was weighted by the true/false field, so that if the mean fell in a certain direction away from the unweighted mean, there would be a higher verification rate of warnings for that time period in that direction.
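The weighted mean center and standard distance used in the map can be computed directly from the centroid coordinates and the 0/1 verified weights. A minimal sketch (function names are mine), assuming planar coordinates such as the state plane feet used here:

```python
import math

def weighted_mean_center(points, weights):
    """Weighted mean center: each centroid contributes to the mean
    in proportion to its weight (here, the 0/1 verified flag)."""
    wsum = sum(weights)
    x = sum(w * p[0] for p, w in zip(points, weights)) / wsum
    y = sum(w * p[1] for p, w in zip(points, weights)) / wsum
    return (x, y)

def standard_distance(points, weights, center=None):
    """Weighted standard distance: the spatial analogue of a standard
    deviation, i.e. the weighted RMS distance of points from the center."""
    if center is None:
        center = weighted_mean_center(points, weights)
    wsum = sum(weights)
    var = sum(w * ((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2)
              for p, w in zip(points, weights)) / wsum
    return math.sqrt(var)
```

With the verified flags as weights, a mean center displaced from the unweighted mean would indicate a direction of higher verification rates, which is exactly the comparison made in the map.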
The results show that the dots are fairly tightly clustered around the unweighted mean. This means that verification rates are fairly uniform in all directions. Interestingly, the weighted time period closest to the unweighted mean is the After Super-Res period, which also had the most data points attributed to it. This suggests that with more data for the other two time periods, they would likely gravitate toward the unweighted mean as well, reinforcing the idea that verification is distributed equally in all directions. The standard distances are also remarkably similar in size, showing that the spatial distributions of the verified polygons are all relatively equal.

Quadrat Analysis:
In the quadrat analysis I explored the spatial variance in the data, using the variance-mean ratio (VMR) test on the true/false verification data to see whether the verification areas are random, clustered, or dispersed. In the map below I have placed every warning's centroid and colored it based on whether that individual warning verified or not. The shaded blocks represent the verification rate in that particular cell. The cells were made by running a fishnet tool to create a grid, then spatially joining the warnings to the grid and taking the average for each grid cell. In total there are 2,036 centroids and 816 grid squares.

Quadrat Results:
All warnings, including multiple reports per warning:
VMR = 0.560276 (Variance 1.415795, Mean 2.526961)
Verified warnings (T/F):
VMR = 0.209629 (Variance 0.529724, Mean 2.526961)

The statistic to take meaning from for the map above is the VMR. A VMR near one indicates a random (Poisson-like) pattern, values well above one indicate clustering, and values approaching zero indicate a dispersed (uniform) pattern. Looking at the VMR across all warnings, including multiple reports per warning, we see that the spatial pattern had moderate variance; in other words, the pattern is near random.
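The VMR itself is simply the variance of the per-cell values divided by their mean. A minimal sketch, assuming the fishnet step has already produced one value per grid cell (GeoStats may use the sample rather than the population variance, which changes the result only slightly):

```python
from statistics import mean, pvariance

def vmr(cell_values):
    """Variance-to-mean ratio over grid-cell values: ~1 suggests a random
    (Poisson-like) pattern, well above 1 suggests clustering, and values
    near 0 suggest dispersion toward a uniform pattern."""
    return pvariance(cell_values) / mean(cell_values)
```

For example, identical counts in every cell (a perfectly uniform pattern) give a VMR of 0, while piling all events into one cell drives the VMR well above 1.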
This makes sense because the Sterling CWA is composed of about half urban and half rural land, so a VMR around 0.5 is about right. If we take the quantitative data out of the warnings and make it purely binary (i.e., true/false), we see that warnings verify in a pattern between a perfectly dispersed pattern (no variance) and a random one. This tells us two things. First, distance from the radar is not an issue for warning verification; otherwise the VMR would be higher, since higher verification rates would be clustered closer to the radar. Second, there are enough people reporting across the rural CWA to verify warnings just as often as in the cities, though, citing the first quadrat test, the cities usually pick up more verifications per warning.

Quantitative Analysis:
Tests:
Analysis of Variance (ANOVA):
PARAMETERS:
Variable / Mean / Std. Dev. / n:
BeforeSR: 0.4096 / 0.4918 / 354
BefSRAfDP: 0.7723 / 0.4193 / 1019
After_DP: 0.7940 / 0.4044 / 301
RESULTS:
SS(between) = 37.8434, df(between) = 2
SS(within) = 313.435, df(within) = 1671
F = 100.876, p-value = 0
The probability reported is from the calculated F-value toward the tail.

Parametric Independent Samples Difference of Means (Student's t):
PARAMETERS:
Variable: BeforeSR vs. BefSRAfDP
Mean: 0.4096 / 0.7723; Std. Dev.: 0.4918 / 0.4193; n: 354 / 1019
Used pooled variance estimate.
RESULTS:
df = 1371, t = -13.3782
p-values: 0 (two-tailed), 0 (lower-tailed)
The probability reported is from the calculated t-value (and its negative, for a two-tailed test) toward the tail(s).

Parametric Independent Samples Difference of Means (Student's t):
PARAMETERS:
Variable: BefSRAfDP vs. After_DP
Mean: 0.7723 / 0.7940; Std. Dev.: 0.4193 / 0.4044; n: 1019 / 301
Used pooled variance estimate.
RESULTS:
df = 1318, t = -0.7944
p-values: 0.4271 (two-tailed), 0.2136 (lower-tailed)
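The pooled-variance t statistics in these outputs can be reproduced from the summary values alone. A sketch (function name mine):

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with a pooled variance estimate, as used in
    the outputs above. Returns (t, degrees of freedom)."""
    df = n1 + n2 - 2
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    t = (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, df
```

Plugging in the BeforeSR and BefSRAfDP values above (0.4096 / 0.4918 / 354 vs. 0.7723 / 0.4193 / 1019) gives t ≈ -13.4 with df = 1371, matching the reported -13.3782 to within input rounding.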
Parametric Independent Samples Difference of Means (Student's t):
PARAMETERS:
Variable: BeforeSR vs. After_DP
Mean: 0.4096 / 0.7940; Std. Dev.: 0.4918 / 0.4044; n: 354 / 301
Used pooled variance estimate.
RESULTS:
df = 653, t = -10.7899
p-values: 0 (two-tailed), 0 (lower-tailed)

Analysis:
In studying the binary (i.e., true/false) warning verification statistics, I used two tests to evaluate the alternative hypothesis for each time period. The ANOVA test was conducted using the binary record for every single warning, grouped by time period. The test showed large variability across the group means, and the resulting p-value was zero. Thus we reject the null hypothesis (that radar upgrades did not have an impact on warning verification) and accept the alternative: the radar upgrades did have an impact on warning verification.

I then ran three parametric independent-samples Student's t tests for the differences between the time intervals / radar upgrades. For the original radar vs. the super-res upgrade, the t-value was -13 and the p-value was 0; we reject the null, accept the alternative, and conclude that the radar upgrade helped warning verification. For the upgrade from super-res to dual-pol, the t-value was -0.79 and the p-value was 0.21, so at a 0.1 significance level we fail to reject the null: the dual-pol upgrade did not show a measurable impact on warning verification. Finally, the t-test between pre-super-res and post-dual-pol:
The t-value for this test was -10 with a p-value of 0; thus we reject the null and once again accept the alternative hypothesis that the radar upgrades did have an impact on severe weather warning verification.

Thesis Conclusion:
The purpose of this project was to identify spatial and quantitative patterns and to show that radar upgrades over the past 8 years at the Sterling, VA Weather Service office have changed the verification of severe weather warnings for the better.

A study of the verified warnings' weighted mean centers showed that each of the three time periods was close to the unweighted mean center, and their standard distances were also nearly the same. This leads to the conclusion that warning verification rates are fairly evenly spread out and do not lean toward one direction or another, indicating that distance to the radar site and other factors, such as population centers, do not play a role in whether a warning gets verified.

Using a quadrat analysis of the warning centroid data and analyzing it via the variance-mean ratio, we see that a quantitative approach across all warnings yields a near-random pattern. With this test, the VMR shows that some areas report more severe occurrences per warning than others, which makes sense given the mix of rural and urban areas in the Sterling CWA at a 5 mi grid size; more research will be needed to confirm this relationship. The binary quadrat analysis showed that if the only thing that mattered was whether the warning verified or not, the distribution would fall somewhere between random and perfectly dispersed. This shows that warnings are verified nearly equally across all areas of the CWA.
This eliminates the possibility of distance from the radar being a factor and backs up the weighted centers analysis.

Quantitative tests show that over the 8-year period, significant improvements to warning verification rates have occurred, with a very strong association with advancements in radar technology. The ANOVA test showed that the three time periods differ, with essentially zero probability that their verification rates are the same. For both the period between the original radar and super-resolution, and between the original radar and the most recent upgrade, Student's t tests show a significant positive change in verification rates with each upgrade. The only alternative hypothesis rejected was for the Student's t test on the change from Super-Res to Dual-Pol; the p-value for this test was 0.21, which is suggestive but not significant at the 0.1 level. My hypothesis is that this is because dual-pol radar better estimates hail but has little effect on strong wind detection; since more wind storms than hail storms occur in this part of the country, it makes sense that these verification rates would be similar.

In summary, my research indicates that radar upgrades over the past 8 years in the Sterling CWA have definitely improved severe weather warning verification rates, and shows random-to-dispersed distributions of verification rates, suggesting that potential issues such as distance to the radar and population do not have an impact on rates.

Other Data Explored:
- Question: What is the relationship between distance from the radar and warning duration?
Alternative hypothesis: The further the warning from the radar, the longer the warning duration, on the assumption that distance adds uncertainty to the warning.
Conclusion: Using a bivariate regression, it was found that no correlation exists between distance from the radar and warning duration.
The resulting r^2 value for this bivariate regression was 0.00002, thus definitively rejecting the alternative hypothesis and accepting the null.

- Question: Is there a relationship between warning area and warning duration?
Alternative hypothesis: The larger the warning area, the longer the warning duration. This assumes that a larger warning area would have a longer duration because of the time taken for a storm to leave the warning area.
Conclusion: Using a bivariate regression, it was found that little correlation exists between the two; warning area explains only ~10% of the variance in warning duration. Thus we reject the alternative and accept the null hypothesis.

- Question: How rare was the June 2012 derecho compared to all severe weather warnings, based on the number of reports received inside warnings on that day?
Alternative hypothesis: It was very rare.
Conclusion: Based on a z-value test, with the population being all reports received inside a warning polygon and the test statistic being 140 reports, I rejected the null hypothesis and confirmed the alternative hypothesis. This was verified based on the z-value being 27.73 and the p-value being 0.005.

Z-value test:
PARAMETERS:
Variable: Obs_Inside
Mean: 2.0388, Std. Dev.: 4.975, n: 2062
Value: 140
RESULTS:
z (or t): 27.7311
prob: 0.5 (reported from Z = 0 to the entered Z-value, as in a textbook table)
Ian Posted February 28, 2013
I love how people still don't realize TS David '79 went through the area.... or that before one starts to analyze the 500 mb pattern during a hurricane season outbreak, one might want to look at whether there was a tropical cyclone passing nearby triggering the entire event. Of course we know how the Ivan outbreak worked out- just a reminder that some of the worst area-wide events have been a direct result of tropical cyclones.
Tropical sys are often good tor producers around here. Probably make up a pretty good percentage of overall. Harder to get strong tor in tropical sys comparatively.. Tho this area probably does better there than nearer landfall as dry air is usually being injected for extra instability.
Ian Posted February 28, 2013
Looks interesting JT will have to read it when I can better take it in.
mattie g Posted February 28, 2013
I had a dream I was in a tornado.
mappy Posted February 28, 2013
MAPS!!!!!!!!! Awesome report, JT... most of the weather/radar stuff went over my head but the GIS portion was spot on. Seriously, we should just have a map thread for us map weenies.
mappy Posted February 28, 2013
I feel like adding at least one map to the mix Map of EF4/EF5 tornadoes from the April 2011 outbreak. Was made for ustornadoes.com
gymengineer Posted February 28, 2013
Tropical sys are often good tor producers around here. Probably make up a pretty good percentage of overall. Harder to get strong tor in tropical sys comparatively.. Tho this area probably does better there than nearer landfall as dry air is usually being injected for extra instability.
Yup, not ideal at all for F4's, but it's not like we've gotten many of those across the area anyway... We're pretty good at catching an F3 from a tropical system.
Ian Posted March 1, 2013
mark's thoughts on met spring tornadoes in the us http://www.ustornadoes.com/2013/02/28/spring-2013-seasonal-forecast/
Disc Posted March 1, 2013
Nice write up. I hope he's right. My guess of the first high risk day is March 25 in the Central/Western States guess thread.
Kmlwx Posted March 6, 2013
Ok I'm all in on severe now. If we get another winter thing to track I might behead myself. I will now start watching the LR like a hawk for even a hint of an event.
Bob Chill Posted March 6, 2013
Anything worth tracking yet?
Ian Posted March 6, 2013
Anything worth tracking yet?
Might be a legit Plains threat this weekend.. Woo storms.
WxUSAF Posted March 7, 2013
Can't wait for a couple rumbles of thunder, 10 mins of mod rain and 15kt gusts come May. Bring it!
Kmlwx Posted March 7, 2013
Might be a legit Plains threat this weekend.. Woo storms.
Bob Chill should hone his forecasting skills on that event. I would try my hand if I wasn't so busy.
andyhb Posted March 11, 2013
US Violent Tornadoes 1900-1999, restored by yours truly. http://pdfuploader.com/uppdfs/601/F4F5_Tornadoes_1900-1999.pdf
North Balti Zen Posted March 20, 2013
I miss thunderstorms.
Deck Pic Posted March 22, 2013
How are weathergods? http://www.weathergodsinc.com/ My buddy who is not a weenie, has gone 3 years running and says they are good...he thinks
Ellinwood Posted March 22, 2013
How are weathergods? http://www.weathergodsinc.com/ My buddy who is not a weenie, has gone 3 years running and says they are good...he thinks
Never heard of them myself (then again, I haven't heard of about half the tour groups out there). $2600 for one person for 10 days is a reasonable/good price compared to the competition.
Ian Posted March 22, 2013
The 100% success rate sounds a bit questionable.
Ellinwood Posted March 22, 2013
The 100% success rate sounds a bit questionable.
They need to update their 2012 logs, but http://www.weathergodsinc.com/chaselogs.asp
Ian Posted March 22, 2013
They need to update their 2012 logs, but http://www.weathergodsinc.com/chaselogs.asp
Guess they only tour in May but looks also like maybe they only chase good days.
Deck Pic Posted March 22, 2013
Guess they only tour in May but looks also like maybe they only chase good days.
I think they made it to New Mexico last year....anything good there you missed?
Ian Posted March 22, 2013
I think they made it to New Mexico last year....anything good there you missed?
Depends on when.. think maybe one or two early May, then some nice storms like the 21st but mainly just over the border in TX. June 12 was probably the biggest day in NM last yr, tho I think any touchdowns were late. Last yr was pretty crappy in prime chase territory.
Ian Posted March 24, 2013
if any of you have 25 to burn this is a cool package: http://www.extremeinstability.com/stormanalysis101.htm i got it on tue and have already watched prob 4.5 hours of it. there are three intro sets and then 4 or 5 'case studies' where he covers the setup then shows chase footage and breaks down what you're seeing. obviously tuned to out west but some great info in there. he's not a met but he's really knowledgeable. sometimes i think the non mets who get to that level explain best.
wxmeddler Posted March 24, 2013
if any of you have 25 to burn this is a cool package: http://www.extremeinstability.com/stormanalysis101.htm i got it on tue and have already watched prob 4.5 hours of it. there are three intro sets and then 4 or 5 'case studies' where he covers the setup then shows chase footage and breaks down what you're seeing. obviously tuned to out west but some great info in there. he's not a met but he's really knowledgeable. sometimes i think the non mets who get to that level explain best.
I was going to ask you about this after you posted the pic on twitter.. I'm not sure whether to just buy it or smooch it off of you during the ride out west in May.
Kmlwx Posted March 24, 2013
I was going to ask you about this after you posted the pic on twitter.. I'm not sure of just to buy it or smooch it off of you during the ride out west in May.
You're going to try to get Ian to loan it to you by kissing him
Archived
This topic is now archived and is closed to further replies.