Wednesday, May 18, 2016

High-Resolution Verification Challenges

Afternoons at the Spring Forecasting Experiment involve some forecasting activities, but much of the time is spent doing evaluations. These differ between the two desks and from year to year, as we subjectively verify cutting-edge diagnostics and model updates. Specifics on this year's verifications will be featured in a future blog post. Inevitably, the evaluations spark discussion of both the products under review and the evaluation process itself, and even the facilitators don't always have easy answers to the questions participants raise. But that's part of the informal atmosphere (pun somewhat intended) that leads to better understanding between forecasters and researchers, and allows the communities to brainstorm improvements. Today, such a question came up at the total severe desk: how do we verify high-resolution forecasts when we don't have high enough resolution in our observations?

The feature that brought this question to the forefront was a cold pool that developed yesterday, 17 May 2016, and surged southward. While completing the evaluation of CLUE members with different microphysics, participants considered reflectivity, updraft helicity, temperature, dew point, and CAPE. Reflectivity offers a fairly one-to-one comparison: the features seen in the reflectivity field appear at approximately the same scale in both the models and the observations.

However, in the temperature field, the models look characteristically different from the observations, which are based on the RUC reanalysis:

In the models, the cold pool over central Texas is evident as a region of 65-70 degree temperatures embedded within a larger air mass in the upper 70s. While the CAMs at this time had the MCS farther south than the observations (shown in the lower-right panel), a cold pool is detectable in the temperature fields of both the models and the observations. The observations, however, are smoothed much more coarsely than the models, which makes the cold pool much less obvious.
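To see why coarser smoothing hides the cold pool, here is a small synthetic sketch, not the SFE data or grids: a sharp temperature drop across an outflow boundary on an assumed 3-km model grid, block-averaged onto a grid four times coarser. The temperature values echo the 65-70 degree cold pool and upper-70s air mass described above; the grid spacings, averaging factor, and boundary shape are assumptions for illustration only.

```python
# Illustrative sketch only: synthetic data, not the SFE fields or grids.
# It shows how averaging a sharp cold-pool boundary onto a coarser grid
# softens the temperature gradient that makes the cold pool stand out.
import numpy as np

dx_model = 3.0                       # assumed CAM grid spacing (km)
x = np.arange(0.0, 300.0, dx_model)  # 300-km transect across the outflow boundary

# Synthetic temperature transect: ~67 F cold pool, ~78 F warm air mass,
# with a sharp transition near x = 150 km.
temp_model = 78.0 - 11.0 / (1.0 + np.exp((x - 150.0) / 3.0))

# "Analysis" version: block-average onto a grid four times coarser (assumed ratio).
factor = 4
n = (temp_model.size // factor) * factor
temp_coarse = temp_model[:n].reshape(-1, factor).mean(axis=1)
x_coarse = x[:n].reshape(-1, factor).mean(axis=1)

# Compare the strongest gradient (degrees F per km) each version resolves;
# the coarse version retains roughly half the gradient of the model grid.
grad_model = np.max(np.abs(np.gradient(temp_model, x)))
grad_coarse = np.max(np.abs(np.gradient(temp_coarse, x_coarse)))
print(f"sharpest gradient on model grid:  {grad_model:.2f} F/km")
print(f"sharpest gradient on coarse grid: {grad_coarse:.2f} F/km")
```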

Our participants had a few suggestions today on how to make these differences between the forecasts and observations easier to interpret, such as generating a spatial correlation between each model and the observations. Even with this alternate display, a dearth of high-resolution observations makes it much more difficult to say which model best captured the mesoscale environmental characteristics of yesterday's MCS.
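As a rough sketch of that spatial-correlation suggestion, the example below computes a Pearson correlation between a model field and an analysis field, assuming both have already been put on a common grid. The field names, the regridding assumption, and the anomaly-based formulation are my own choices for illustration, not the SFE's actual verification code.

```python
# A minimal sketch of the "spatial correlation" idea raised by participants.
# Grids, variable names, and the regridding step are assumptions for
# illustration; this is not the SFE's verification system.
import numpy as np

def spatial_correlation(model_field, obs_field):
    """Pearson correlation between two 2-D fields on a common grid.

    Both inputs are assumed to be numpy arrays already interpolated to the
    same (coarser) analysis grid, with NaNs where either field is missing.
    """
    valid = np.isfinite(model_field) & np.isfinite(obs_field)
    m = model_field[valid].ravel()
    o = obs_field[valid].ravel()
    # Correlate anomalies from each field's own mean so a uniform bias
    # (e.g., a member that is everywhere 2 degrees too warm) does not
    # dominate the score; only the spatial pattern is compared.
    m_anom = m - m.mean()
    o_anom = o - o.mean()
    return float(np.sum(m_anom * o_anom) /
                 (np.sqrt(np.sum(m_anom ** 2)) * np.sqrt(np.sum(o_anom ** 2))))

# Hypothetical usage: 2-m temperature from one CLUE member vs. the analysis,
# both already interpolated to the analysis grid.
# score = spatial_correlation(member_t2m_on_analysis_grid, analysis_t2m)
# print(f"spatial correlation: {score:.3f}")
```

A single number per member would make ranking the microphysics configurations easy, but it also hides where the disagreement sits, so maps of the differences would still be worth viewing alongside any such score.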

As I've mentioned in prior posts, the fact that we are now concerned with finer-scale environmental representation speaks well to how far models have come in capturing larger-scale features. Indeed, if I squint or stand too far from the screen, I'm hard-pressed to detect large differences either among the models or between the models and the observations. In fact, participants often comment that this set of members is quite similar. As we move further into subjectively evaluating these types of fields, though, how best to gauge such differences is one of the challenges future SFEs will have to face. Through discussion with the participants, ideas for future verification approaches are generated, and what is a brief comment this year may be realized in future experiments.
