mimillman Posted March 1, 2015

For the past month, I've been working extensively with the Computer Science department here at the University of Chicago on a project that might be of interest to you. The original motivation for the research proposal came from this forum, so I thought I'd share it with you.

We've been collecting data through an automated process that pulls model data twice a day (the 00z and 12z runs) and validates it, up to 24 hours out, against observed surface data. The software, written in Python, stores the data on a University server so it can be accessed for analysis. The parameters we are analyzing are:

1. Temperature
2. Pressure (in mb)
3. Dewpoint
4. Relative humidity
5. Wind direction
6. Wind speed

The idea is for meteorologists to be able to visualize more clearly where there is error between the model data and the surface data, and how this error may propagate over time. We are just now beginning to create a user interface that will let the user enter the parameter they're interested in, the forecast period, and a date, and will print a map similar to the one attached to this post. We would also like to perform a dyadic partition, followed by a cluster analysis, on the data presented below. We will most likely be using C for the cluster analysis.

If you are more interested in the software, please send me a PM. If you have any suggestions for our research, please post and let me know!

The map below depicts initialization error in 10m temperature for today's 12z GFS run. The error threshold I've set is 5.0 F. Any blue point on the map is where the model overestimated temperature by more than 5 degrees at initialization, red is where it underestimated by more than 5 degrees, and yellow means the error fell below the 5-degree threshold. Enjoy!
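The map-coloring rule described above can be sketched in a few lines of Python. This is only an illustration of the threshold logic, not the project's actual code; the station IDs and temperature values are made up:

```python
THRESHOLD_F = 5.0  # error threshold in degrees Fahrenheit

def classify_error(model_temp_f, observed_temp_f, threshold=THRESHOLD_F):
    """Return the map color for one station.

    'blue'   -> model overestimated by more than the threshold
    'red'    -> model underestimated by more than the threshold
    'yellow' -> error fell within the threshold
    """
    error = model_temp_f - observed_temp_f
    if error > threshold:
        return "blue"
    if error < -threshold:
        return "red"
    return "yellow"

# Hypothetical (model, observed) temperatures at initialization:
stations = {
    "KORD": (34.2, 27.9),  # overestimate of 6.3 F -> blue
    "KMDW": (30.1, 36.0),  # underestimate of 5.9 F -> red
    "KDPA": (31.5, 33.0),  # within threshold -> yellow
}
colors = {k: classify_error(m, o) for k, (m, o) in stations.items()}
print(colors)  # {'KORD': 'blue', 'KMDW': 'red', 'KDPA': 'yellow'}
```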
radarman Posted March 1, 2015

I did nearly the exact same thing back in 2003, with an interpolation scheme for distance between grid corner points and a correction for height ASL. FWIW, I got very similar results with respect to high errors in mountainous regions, even after the corrections were applied (and I paid particular attention to the corrections, because right away I suspected they were erroneous). Of course, I was using Fortran and you have advanced to Python... jealous
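For readers unfamiliar with the scheme radarman describes, here is a minimal sketch: bilinear interpolation of a gridded value to a point between four grid corner points, plus a simple lapse-rate height adjustment. The lapse rate and all values are illustrative assumptions, not the 2003 code:

```python
import math

# Standard atmospheric lapse rate (~6.5 C/km) expressed in F per meter;
# an assumed value for illustration only.
STD_LAPSE_F_PER_M = 0.0065 * 9.0 / 5.0

def bilinear(x, y, f00, f10, f01, f11):
    """Interpolate within a unit grid cell; (x, y) are fractional
    coordinates in [0, 1] measured from the (f00) corner."""
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)

def height_corrected_temp(t_interp_f, grid_elev_m, station_elev_m,
                          lapse=STD_LAPSE_F_PER_M):
    """Adjust an interpolated temperature from the (interpolated) grid
    elevation down or up to the station elevation."""
    return t_interp_f + lapse * (grid_elev_m - station_elev_m)

# Midpoint of a cell with corner temperatures 1, 2, 3, 4 F:
print(bilinear(0.5, 0.5, 1.0, 2.0, 3.0, 4.0))  # 2.5
```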
mimillman Posted March 17, 2015 (Author)

Here's some progress. These parameters were entered for KORD, the 00z GFS 12-hour forecast on 3/17, looking at temperature. The first attachment is the normalized error (red means the surface was warmer than modeled). The second attachment is a kernel density estimate for the cluster analysis.
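A kernel density estimate like the one in the second attachment can be sketched in plain Python: a 1D Gaussian KDE over the per-station temperature errors. The bandwidth and error values below are illustrative assumptions, not the project's actual data:

```python
import math

def gaussian_kde_1d(samples, bandwidth):
    """Return a function estimating the density at x from 1D samples,
    using a Gaussian kernel of the given bandwidth."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(
            math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    return density

# Hypothetical model-minus-observed errors (F) for a handful of stations:
errors = [-6.1, -5.4, -0.8, 0.3, 1.2, 6.3]
kde = gaussian_kde_1d(errors, bandwidth=1.5)

# The estimated density is higher near the central cluster of small
# errors than far outside the sample range:
print(kde(0.0) > kde(15.0))  # True
```

In practice `scipy.stats.gaussian_kde` does this (with automatic bandwidth selection and 2D support), but the hand-rolled version above shows the mechanics.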