Tuesday, June 12, 2012

Leveraging the known

I left off the last post indicating that we know the model biases, at least for the coarser-grid models. But that is a lie.



The NAM is new. It was released last September-ish on a new grid (the so-called B grid), with a different dynamical core as a result (or vice versa, whichever). The NAM 4-km nest uses a version of the BMJ convection scheme.

Soon the SREF will be upgraded from ~32-km to ~16-km grid spacing; the older dynamical cores (Eta and RSM) will be removed and replaced with the NMMB and NMM cores.

The models are evolving at a rate of at least one major upgrade per year. That means that for each modeling system it is difficult, if not impossible, to construct a data set long enough to calibrate it specifically for severe weather. And what about constructing meaningful characteristics or statistics? By the time you can complete the work, communicate it, and get people to apply it ... the model has changed again.

This is a formidable challenge. Perhaps one full year is just enough to calibrate, given the right mix of events and a sufficient number of them. There is no easy answer here. The computer revolution makes it easier to find and correct errors or bugs, but also easier to make mistakes and then compensate for them by "fixing" the wrong things.
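To make "calibrate" concrete, here is a minimal sketch of the simplest kind of adjustment implied: estimate an additive bias from a season of matched forecast/observation pairs from one frozen model version, then remove it from new forecasts. All names and numbers below are made up for illustration; a real severe-weather calibration would involve far more (conditional corrections, reliability, spatial verification), but the dependence on a fixed model version is the same.

```python
import numpy as np

def estimate_additive_bias(forecasts, observations):
    """Mean (forecast - observation) over a set of matched cases.

    forecasts, observations: 1-D arrays of paired values, e.g. a season
    of forecast vs. observed values of some scalar at matched points and
    times. Purely illustrative data, not from any real model.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    observations = np.asarray(observations, dtype=float)
    return float(np.mean(forecasts - observations))

def correct_forecast(new_forecast, bias):
    """Remove the estimated additive bias from a new forecast."""
    return np.asarray(new_forecast, dtype=float) - bias

if __name__ == "__main__":
    # A year of matched pairs from a single, frozen model version
    # (made-up numbers). If the model is upgraded, this bias estimate
    # no longer applies and the pairs must be rebuilt, e.g. from
    # parallel runs of the new version.
    fcst = np.array([28.1, 30.4, 25.9, 31.2, 27.5])
    obs = np.array([27.0, 29.0, 25.0, 30.1, 26.4])
    bias = estimate_additive_bias(fcst, obs)
    print("estimated bias:", round(bias, 2))
    print("corrected new forecast:", correct_forecast(29.5, bias))
```

Even this toy correction is only valid for the model version that produced the training pairs, which is the point: every upgrade resets the clock on the calibration data.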

You can see how this issue is amplified when dealing with ensembles, especially multi-model ensembles, and really for any mix of physics, multi-model or not. We almost have to choose between a jack-of-all-trades modeling system and a set of highly specialized models.

Then the question becomes the update frequency, or even the update severity. More frequent updates would actually require running the models more, since you would have to do parallel runs for calibration purposes. More significant updates would increase this requirement even further.

Forecasters, and even researchers, develop skill at figuring out what the models are good at. We must develop that expertise and pair it with strategies to leverage the guidance. Forecasters are good at gut checks, relating observations to severe storm environments and comparing them with model forecasts. We should not, as a conscious choice, sacrifice these forecaster skills. Sometimes developing expertise means developing an expert team, not a team of experts. Pairing well-known models, biases and all, with skilled forecasters is the former.
