Today was an interesting day: we made a joint decision on the domain where we would collectively issue our forecasts. The clean-slate CI and severe domain would be in south Texas. According to the models, this area could produce multiple types of severe weather (pulse severe was possible outside the stronger flow, while farther north, in and behind the frontal zone where the flow and shear were stronger, a more organized threat was possible) as well as multiple triggers for CI (along the cold front moving south, over the higher terrain from Mexico into New Mexico, and potentially along the sea breeze near Houston).
It was increasingly clear that adding value by moving from coarse to high temporal resolution is difficult because of how accurate we require the models to be. A model may demonstrate capability by simulating the correct convective mode and evolution, but whether it produces that result at the right time and in the right place still determines the goodness of the forecast. So no matter what kind of spatial or temporal smoothing we apply to derive probabilities, we remain at the mercy of processes in the model that can be early or late and thus displaced, or displaced and increasingly wrong in timing. This is not new, mind you, but it underscores the difference between capability and skill.
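As a rough sketch of the neighborhood-style smoothing alluded to above (the function name, thresholds, and the wrap-around boundary handling are illustrative assumptions, not the experiment's actual method), the fraction of ensemble members producing an event within some radius of each grid point can be turned into a probability field like this:

```python
import numpy as np

def neighborhood_prob(member_fields, threshold, radius):
    """Fraction of ensemble members exceeding `threshold` anywhere within
    `radius` grid points of each cell (a simple neighborhood-probability sketch).

    member_fields: array of shape (n_members, ny, nx), e.g. simulated reflectivity.
    """
    n_mem, ny, nx = member_fields.shape
    hits = np.zeros((ny, nx))
    for m in range(n_mem):
        exceed = member_fields[m] >= threshold
        # Dilate the binary exceedance field by the neighborhood radius.
        # np.roll wraps at the edges, which is fine for an interior-domain sketch.
        dilated = np.zeros((ny, nx), dtype=bool)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dy * dy + dx * dx <= radius * radius:
                    dilated |= np.roll(np.roll(exceed, dy, axis=0), dx, axis=1)
        hits += dilated
    return hits / n_mem
```

Note that this kind of smoothing spreads a displaced solution over a wider area; it cannot correct a timing error, which is exactly the limitation described above.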
In the forecast setting, with operational timeliness requirements, there is little room for capability alone. This is not to say that such models don't get used; it just means that they have little utility. Operational forecasters are skilled with the available guidance, so you can't simply put models of unknown skill in their laps and expect immediate high impact (value). The strengths and weaknesses need to be evaluated. We do this in the experiment by relating the subjective impressions of participants to objective skill score measures.
And we do critically evaluate them. But let me be clear: probabilities never tell the whole story. The model processes can be just as important in generating forecaster confidence in model solutions, because the details can be used as evidence to support or refute processes that can be observed. Finding clues for CI is rather difficult because the boundary layer is the least well observed. We have surface observations, which can serve as a proxy for boundary layer processes, but not everything that happens in the boundary layer happens at the surface.
A similar situation arises for the severe weather component. We can see storms by interrogating model reflectivity, but large reflectivity values are not highly correlated with severe weather. We don't necessarily even know whether the rotating storms in the model are surface-based, which would pose a higher threat than, say, elevated strong storms. Efforts to use additional fields as conditional proxies alongside the severe variables are underway; these take time to evaluate and refine before we can incorporate them into probability fields. Again, these methods can be used to derive evidence that a particular region is or is not favored for severe weather.
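A minimal sketch of what such a conditional proxy might look like, assuming hypothetical field names and purely illustrative thresholds (this is not the experiment's actual formulation): flag rotating-storm grid cells only where the environment suggests the storm is surface-based rather than elevated, e.g. by comparing surface-based and most-unstable CAPE.

```python
import numpy as np

def conditional_severe_mask(updraft_helicity, sbcape, mucape,
                            uh_thresh=75.0, cape_ratio=0.75):
    """Flag cells where a rotating-storm proxy (updraft helicity) coincides
    with a mostly surface-based environment. Thresholds are illustrative.

    All inputs are 2-D arrays on the same grid.
    """
    rotating = updraft_helicity >= uh_thresh
    # Treat the storm as surface-based when SBCAPE is close to MUCAPE;
    # np.maximum guards against dividing-by-zero-like degenerate cases.
    surface_based = sbcape >= cape_ratio * np.maximum(mucape, 1.0)
    return rotating & surface_based
```

A mask like this could then feed the same neighborhood smoothing used for the other probability fields, which is why evaluating and refining the conditioning takes time before it is trustworthy.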
Coming back to our forecast for today: there was evidence for both elevated storms and surface-based organized storms, and evidence to suggest that the cold front might not be the initiator of storms even though it was in close proximity. We will verify our forecasts in the morning and see if we can make some sense out of all the data, in the hope of finding some semblance of signal that stands out above the noise.
Extending the conversation about real-time high-resolution convection-allowing modeling.
Monday, May 07, 2012
2012 HWT-EFP
Today is the first official day of the Hazardous Weather Testbed Experimental Forecast Program's Spring Experiment. We will have two official desks this year: Severe and Convection Initiation (CI). Both desks will explore the use of high-resolution convection-permitting models in making forecasts. On the severe side, these include total severe storm probabilities for the Day 1 1630 UTC convective outlook, plus three forecast periods similar to the enhanced thunder product (20-00, 00-04, and 04-12 UTC); on the CI side, forecasts of CI and convective coverage for three four-hour periods (16-20, 20-00, and 00-04 UTC).
We have three ensembles that will be used heavily: the so-called Storm-Scale Ensemble of Opportunity (SSEO; 7 members, including the NSSL-WRF, the NMM-B nest, and the hi-res window runs with 2 time-lagged members), the AFWA ensemble (Air Force; 10 members), and the SSEF (CAPS; 12 members).
We will be updating throughout the week as events unfold (not necessarily in real time) and will try to put together a week in review. Let the forecasting begin.