Thursday, June 11, 2009

A Few Thoughts Regarding the HWT Spring Experiment – Mike Fowle

As others have mentioned, I want to thank the SPC and NSSL for coordinating the program once again this year. As a first-year participant, I found the experience both challenging and rewarding. Although the overall magnitude/coverage of convective weather continued to be generally below normal, there were still plenty of forecast challenges to keep us busy throughout the week.

Now for some observations (mainly subjective) about a few of the issues we encountered:

1. Having completed a verification project on an early version of the MM5 (6 km horizontal grid spacing) back in the late 1990s, I found it interesting to see the current evolution of mesoscale modeling. While there have been many changes/improvements (e.g., microphysics schemes, PBL schemes, radar assimilation), it was evident to me that many of the same problems we encountered then (sensitivity to initial conditions, sensitivity to model physics, parameterization of features, upscale growth, etc.) still haunt this generation of mesoscale models.

2. With the increase in computing power, we are now able to run models with 1 km horizontal grid spacing over a large domain in an operational setting! However, examining the 1 km output did not seem to add much (if any) value over the 4 km models – especially considering the extra computational expense.

3. All of the high resolution models still appear to struggle when the synoptic scale forcing is weak. In other words, modeling convective evolution dominated by cold pool propagation remains extremely challenging.

4. The output from the high resolution models remains strongly correlated to that of the parent model used to initialize them. Furthermore, if the synoptic scale conditions are not reasonably well forecast, you have little hope of modeling the mesoscale with any accuracy.

5. Not surprisingly, each model cycle tended to produce a wide variety of solutions (especially during weak forcing regimes) – with seemingly little continuity amongst individual deterministic members (even with the same ICs), or from run to run. Sensitivity to ICs and the lack of spatial and temporal observations on the mesoscale remain daunting issues!

Even with some of these issues, on most days the high resolution models still provided valuable guidance to forecasters – most notably regarding storm initiation, storm mode, and overall storm coverage. Although the location/timing of features may not be exactly correct, seeing the overall “character of the convection” can still be of great utility to forecasters, especially since this information is not available from the current suite of operational models (e.g., NAM/GFS).

From a field office perspective – one of the big challenges I see in the future is how to best incorporate high resolution model guidance into the “forecast funnel.” Given that many forecasters already feel we are at (or even past!) the point of data overload, they need proof that these models can be of utility in the forecast process. Moreover, I believe that on an average day, most forecasters can/will devote at most 30-60 minutes to interrogating this guidance. Is this sufficient time? During the experiment we were devoting a few hours to evaluating the models – and I still felt we were only scratching the surface.

Next, what is the best method to view the data? A single deterministic run? Multiple deterministic runs? Probabilistic guidance from storm scale ensembles? Model post-processed products (e.g., surrogate severe)? Some combination of the above? In addition, what fields give forecasters the most bang for the buck? Simulated reflectivity, divergence, winds, UH, updraft/downdraft strength? Obviously many of these questions have yet to be answered; however, what is clear to me is that significant training is going to be required regarding both what to view and how to view it.
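For readers unfamiliar with the “surrogate severe” style of post product mentioned above, the basic idea can be sketched in a few lines: flag grid points where a proxy field such as updraft helicity exceeds a threshold, then smooth that binary field spatially to yield a pseudo-probability of severe weather. The sketch below is a minimal illustration of that concept only – the threshold, smoothing length, and function names are my own assumptions, not the operational algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def surrogate_severe_sketch(uh, uh_thresh=75.0, sigma=3.0):
    """Toy 'surrogate severe' style product (illustrative values only).

    uh: 2-D array of hourly-max updraft helicity (m^2/s^2).
    Returns a smoothed pseudo-probability field in [0, 1].
    """
    exceed = (uh >= uh_thresh).astype(float)    # binary exceedance grid
    prob = gaussian_filter(exceed, sigma=sigma)  # spread points into a smooth field
    return np.clip(prob, 0.0, 1.0)

# toy example: a single strong UH maximum on a 20x20 grid
uh = np.zeros((20, 20))
uh[10, 10] = 120.0
p = surrogate_severe_sketch(uh)
```

The smoothing step is what turns a noisy deterministic exceedance map into guidance a forecaster can read as “severe potential near here,” which is the spirit of the product discussed in the text.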

In terms of verification, the object based methodology that DTC is developing is an interesting concept. Although still in its infancy, I like the idea and do see some definite utility. However, as we noted during the evaluation, it still appears as though this methodology may be best suited for a “case study” approach rather than an aggregate (i.e. seasonal) evaluation (at least at this point).
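To make the object-based methodology concrete for readers who have not seen it: the core idea is to identify coherent “storm objects” in both forecast and observed fields, then compare object attributes (location, size) rather than scoring point-by-point overlap. The sketch below is a toy version of that concept – it is not DTC’s MODE implementation, and the reflectivity threshold, greedy centroid matching, and distance limit are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def find_storm_objects(refl, thresh=40.0):
    """Identify objects as connected regions of reflectivity >= thresh (dBZ)."""
    mask = refl >= thresh
    labeled, nobj = label(mask)  # connected-component labeling
    centroids = center_of_mass(mask, labeled, range(1, nobj + 1))
    return labeled, centroids

def match_objects(fcst_centroids, obs_centroids, max_dist=5.0):
    """Greedily pair each forecast object with the nearest unused
    observed object within max_dist grid points."""
    pairs, used = [], set()
    for i, fc in enumerate(fcst_centroids):
        dists = [np.hypot(fc[0] - oc[0], fc[1] - oc[1]) for oc in obs_centroids]
        if dists:
            j = int(np.argmin(dists))
            if dists[j] <= max_dist and j not in used:
                pairs.append((i, j, dists[j]))
                used.add(j)
    return pairs

# toy case: one forecast storm displaced slightly from the observed storm
fcst = np.zeros((30, 30)); fcst[10:13, 10:13] = 45.0
obs = np.zeros((30, 30)); obs[12:15, 11:14] = 50.0
_, fc = find_storm_objects(fcst)
_, oc = find_storm_objects(obs)
matches = match_objects(fc, oc)
```

A matched pair with a small centroid displacement would count as a “good” forecast even with zero gridpoint overlap – which is exactly why the approach is attractive for convection, and also why aggregating such attributes over a whole season is harder than reading them off a single case.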

As echoed by others, it was a privilege to be a participant in this year’s program and I would jump at the opportunity to attend in future years. In my humble opinion, the mesoscale models proved long ago that they have utility in the forecast process – if used in the proper context. There are obvious challenges to tackle in the years to come, and I look forward to seeing the continued evolution of techniques/technology.

Monday, June 08, 2009

11-15 May 2009 Spring Experiment

I would like to thank SPC, NSSL, and others for the invitation to participate in the 2009 HWT. As most of you are aware (either through discussions with me or your experiences sitting at an airport for hours waiting for a delayed flight), the socioeconomic impacts of convective weather on the aviation industry are substantial. Attending the HWT is a way to grasp where the edge of the science is, to establish a reality check for myself, and to share with others as we work towards NextGen. It was intriguing to learn, both from operational forecaster feedback and from the forecasts we developed at the HWT, that increased accuracy does not simply follow from higher model resolution, as some might believe. Having an HWT week with an aviation focus is a good way for this research to get exposure in a capacity it may not have been designed for, but one that illustrates its potential utility and benefit.
Last year I pointed out products like simulated reflectivity from the HWT, which have since gained attention for their potential utility at a high level in understanding what the National Airspace System scenario might look like for the day. Although it may never verify exactly, due to its deterministic nature, and is somewhat noisy, simulated reflectivity is a good visual aid for an Air Traffic Flow Manager (a non-meteorologist) to get a quick frame of reference on potential systemic impacts.
Other forecasts, like the probability of >= 40 dBZ intensity within 25 miles of a point and the 18-member ensemble for 40 dBZ intensity, could have enormous value for the aviation industry, as 40 dBZ is also the intensity level that aircraft no longer penetrate, causing deviations and ultimately delays. Research on convective mode was another area I found myself intrigued with, as convective mode from an aviation perspective provides a frame of reference for determining the permeability of the convective constraint. Discrete cells and linear convection can be equally disruptive, but they would be managed very differently if forecast with a high degree of skill, so modal information is just as important to aviation as location and timing.
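As a rough illustration of how a “probability of >= 40 dBZ within 25 miles of a point” can be computed from an ensemble: for each member, mark every grid point that has reflectivity at or above the threshold anywhere inside the neighborhood radius, then take the fraction of members with a hit. The sketch below shows this neighborhood approach in general terms – the grid spacing, radius in grid points, and toy ensemble are my assumptions, not the experiment’s actual configuration.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def neighborhood_prob(members, thresh=40.0, radius_pts=10):
    """Fraction of members with reflectivity >= thresh within radius_pts
    grid points of each location. On a 4-km grid, ~25 miles (~40 km) is
    roughly 10 grid points (an assumed conversion, for illustration).

    members: 3-D array (n_members, ny, nx) of reflectivity in dBZ.
    """
    yy, xx = np.ogrid[-radius_pts:radius_pts + 1, -radius_pts:radius_pts + 1]
    footprint = yy ** 2 + xx ** 2 <= radius_pts ** 2  # circular neighborhood
    # max over the neighborhood, per member, then compare to the threshold
    hits = [maximum_filter(m, footprint=footprint) >= thresh for m in members]
    return np.mean(hits, axis=0)

# toy 3-member ensemble on a 20x20 grid: two members place a storm near
# the center, one member stays convection-free
ens = np.zeros((3, 20, 20))
ens[0, 10, 10] = 45.0
ens[1, 9, 11] = 50.0
prob = neighborhood_prob(ens, radius_pts=4)
```

The neighborhood step is what makes the product tolerant of the small displacement errors discussed elsewhere in these entries: a storm 10 miles from where a member predicted it still counts as a hit for the point in question.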
Thanks for a great week!

Sunday, June 07, 2009

Reid Hawkins' view of the June 1-5 HWT Spring Experiment

As I sit here at the Will Rogers Airport in Oklahoma City waiting on a plane that is 2 hours late, I wanted to reflect on my experiences with the 2009 HWT Spring Weather Experiment. This reflection will be more in the style of stream of consciousness, so I hope someone out there can follow it.

First, a well-deserved round of applause for Steve W, Jack, and Matt for steering us through the plethora of numerical models and the objective verification techniques of the DTC. Our week started off with a rather well-behaved and straightforward event over the northern Mississippi Valley into the northern plains. The second day was a highly frustrating forecast over Oklahoma, Kansas, and northern Texas, where overnight convection and a gravity wave played havoc with the forecast and convection failed to develop over Oklahoma. The third day was even more frustrating, with a weakly forced case south of an east-west stationary front from northern Virginia back to Kentucky. The final case was a high plains case from Wyoming and Nebraska southward to the Texas Panhandle.

For our week of evaluating the models, my first impression was of the sheer number of models providing a whole host of solutions. Drawing on their experience, the staff steered us toward looking at the reflectivity fields, outflows, and updrafts instead of digging ourselves into a myriad of model fields that no one could possibly have examined in the short time we had to prepare a forecast outlook. After shifting my paradigm to this style of forecasting, which was somewhat uncomfortable, it was reassuring when we saw similar results from the models. This was not a common event, as most of the cases were marginal or weakly forced.

One concern I have is that I did not see a huge bang for the buck in the 1 km CAPS model runs vs. the 4 km CAPS models. There was much discussion about the assimilation of data into the models, and my thought is that until we sample the atmosphere with higher resolution, greater frequency, and better accuracy, I do not see how the higher resolution models will provide better results for forecast operations. This is just an opinion, and I hope the modelers can prove me wrong.

Another concern I have is the way the data is displayed to the forecaster. With the wealth of data that is available and our current display techniques, I am afraid forecasters have been, or will be, forced to settle on a comfortable subset of data types. This means there may be valuable data sets to view, but due to comfort level and time constraints these data sets may never be used.

Overall, the week was extremely enlightening in showing the techniques that are being developed to help the forecaster. In time, as the development envelope is pushed, I expect to see great information delivered to the operational desk. I am somewhat disheartened, but not surprised, by the lack of help we saw in weakly forced environments.