Saturday, May 28, 2011

This week in CI

The week was another potpourri of convection initiation challenges, ranging from evening convection in WY/SD/ND/NE, to afternoon convection in PA/NY, and back over to OK/TX/KS for a few days. We encountered many events similar to those of the previous week, again struggling with the timing of convection onset. But we have consistently placed good categorical outlooks over the region and have consistently anticipated the correct location of the first storms. I think the current perception is that we identify the mechanisms, and thus the episodes of convection, but timing the features remains a big challenge. The models tend not to be consistent (at least in the aggregate) for at least two reasons: no weather event is identical to any other, and the process by which CI occurs can vary considerably.

The processes that can lead to CI were discussed on Friday and include (a toy checklist version is sketched after the list):
1. a sufficient lifting mechanism (e.g. a boundary),
2. sufficient instability in the column (e.g. CAPE),
3. instability that can be quickly realized (e.g. low-level CAPE, weak CIN, a low LCL, or an LFC close to the LCL),
4. a deep moist layer (e.g. reduced dry air entrainment),
5. a weakening cap (e.g. cooling aloft).
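
Taken together, these ingredients behave like a checklist applied to each model column or observed sounding. The snippet below is only a toy sketch of that idea in Python; the field names and thresholds (50 J/kg of CIN, 25 hPa of LFC-LCL separation, and so on) are hypothetical placeholders, not values used in the experiment.

```python
# Toy checklist of CI ingredients applied to one model column / sounding.
# All field names and thresholds are hypothetical placeholders.

def ci_ingredients(col):
    """Return a True/False flag for each CI ingredient."""
    return {
        "lifting_mechanism": col["near_boundary"],                    # 1. a boundary nearby
        "sufficient_cape":   col["cape_jkg"] >= 500.0,                # 2. instability in the column
        "quickly_realized":  (col["cin_jkg"] > -50.0                  # 3. weak CIN (CIN stored as negative) ...
                              and col["lfc_minus_lcl_hpa"] <= 25.0),  #    ... and an LFC close to the LCL
        "deep_moist_layer":  col["moist_layer_depth_hpa"] >= 100.0,   # 4. limits dry-air entrainment
        "weakening_cap":     col["cap_tendency_k_per_hr"] < 0.0,      # 5. cooling aloft
    }

if __name__ == "__main__":
    example = {
        "near_boundary": True,
        "cape_jkg": 1800.0,
        "cin_jkg": -30.0,
        "lfc_minus_lcl_hpa": 15.0,
        "moist_layer_depth_hpa": 120.0,
        "cap_tendency_k_per_hr": -0.4,
    }
    flags = ci_ingredients(example)
    print(flags, "-> CI favored:", all(flags.values()))
```

In practice, of course, we weigh these ingredients subjectively and continuously at the desk rather than passing them through hard thresholds.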

That is quite a few ingredients to consider quickly. Any errors in the models can then be amplified to either promote or hinder CI. In the last two weeks we had broadly similar simulations along the dryline in OK/TX where the models produced storms where none were observed. Only a few of the model storms were longer lasting, and the model also produced what we have called CI failure: storms that initiate but do not last very long. Using this information we can quickly assess that it was difficult for the model to produce storms in the aggregate. How we use this information remains a challenge, because storms were produced. It is quite difficult to verify the processes we are seeing in the model and thus either develop confidence in them or determine that the model is simply prolific in developing some of these features.

What is becoming quite clear is that we need far more output fields to adequately scrutinize the models. However, given the self-imposed time constraints, we need a data visualization system that can handle lots of variables, perform calculations on the fly, and deal with many ensemble members. We have been introduced to the ALPS system from GSD, and it seems up to the challenge of rapid visualization, with the unique display capabilities for which it was designed (e.g. large ensembles).

We also saw more of what the DTC is offering in terms of traditional verification, object-based verification, and neighborhood object-based verification. There is just so much to look at that it is overwhelming day to day. I hope to look through this in great detail in the post-experiment analysis. There is a lot of information buried in that data that is very useful now (e.g. day to day) and will be useful later (e.g. aggregate statistics). This is truly a good component of the experiment, but there is much work to be done to make it immediately relevant to forecasting, even though the traditional impact is post-experiment. Helping every component fill an immediate niche is always a challenge. And that is what experiments are for: identifying challenges and finding creative ways to help forecasting efforts.
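
To make the neighborhood idea concrete, one common metric is the fractions skill score (FSS), which compares the fractional coverage of an event (say, reflectivity above a threshold) in the forecast and observed fields within a square neighborhood. The sketch below is only a minimal illustration of that concept, not the DTC/MET implementation; the 40 dBZ threshold, the 9-gridpoint window, and the synthetic fields are arbitrary.

```python
# Minimal sketch of a neighborhood (fractions skill score) verification.
# Not the DTC/MET implementation; threshold, window, and test fields are arbitrary.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold=40.0, window=9):
    """Fractions skill score for exceedances of a threshold within a square window."""
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fcst = rng.uniform(0, 60, (200, 200))   # stand-in reflectivity field (dBZ)
    obs = np.roll(fcst, 5, axis=1)          # "observed" field displaced a few grid points
    print("FSS:", round(fss(fcst, obs), 3))
```

An FSS near 1 means the forecast puts coverage in roughly the right neighborhoods even when individual storms are displaced, which is exactly the kind of credit a gridpoint-by-gridpoint score withholds from convection-allowing models.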

Thursday, May 26, 2011

Tornado Outbreak

I am posting late this week. It has been a wild ride in the HWT. The convection initiation desk has been active, and Tuesday was no exception. The threat of a tornado outbreak was clear. The questions we faced for forecasting the initiation of storms were:
1. What time would the first storms form?
2. Where would they be?
3. How many episodes would there be?

This last question requires a little explanation. We always struggle with the criteria that denote convection initiation. Likewise, we struggle with how to define the multiple areas and multiple times at which deep moist convection initiates. This type of problem is "eliminated" when you issue a product for a long enough time period. Take the convective outlook, for example. Since the risk is defined for the entire convective day, you can account for the uncertainty in time by drawing a larger risk area and subsequently refining it. But as you narrow down your time window (from 1 day to 3 hours or even 1 hour), the problems can become significant.
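
A toy bit of arithmetic shows why the window length matters so much. If, hypothetically, the timing error of a CI forecast behaves like a Gaussian with a 2-hour standard deviation, a day-long product essentially always contains the event, while 3-hour and 1-hour windows centered on the best-guess time capture it far less often. The numbers below are illustrative only.

```python
# Toy illustration: how timing uncertainty hurts short-window CI products.
# Assumes (hypothetically) a Gaussian timing error with a 2-hour standard deviation.
from math import erf, sqrt

def p_within(window_hours, sigma_hours=2.0):
    """Probability that CI falls inside a window centered on the forecast time."""
    half = window_hours / 2.0
    return erf(half / (sigma_hours * sqrt(2.0)))

for window in (24, 3, 1):
    print(f"{window:>2}-hour window: {p_within(window):.0%} chance of capturing CI")
```

Under that assumption the 24-hour product captures the event essentially 100% of the time, the 3-hour window only about half the time, and the 1-hour window roughly a fifth of the time.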

In our case, the issue for the day was compounded because the dryline placement in the models was significantly east of the observed position by the time we started making our forecast. We attempted to account for this fact and, as such, had to adopt a feature-relative perspective of CI along the dryline. However, the mental picture you assemble of the CI process (location, timing, number of episodes, number of storms) is tied not just to the boundaries you are considering, but also to the presumed environment in which they will form.

The feature-relative environment would then necessarily be in error, because we simply do not have enough observations to account for the model error. We did realize that the shallow moisture shown on the morning soundings was not going to be the environment in which our storms formed. Surface dew points were higher, staying near 68 F in the warm sector. We later confirmed this with soundings at LMN, which showed the moist layer increasing in depth with time.

So we knew we had two areas of initial storm formation: one in the OK panhandle and into KS, along the cold front to the west and the triple point to the east; the other along the dryline in OK and TX. We had to decide how far south storms would initiate. As we were figuring all of this out, we had to look at the current satellite imagery, since that was the only tool accounting for the correct dryline placement, and estimate how far east the dryline might travel or mix out to in order to make the forecast.

Sure enough, the warm sector had multiple cloud streets ahead of the dryline. Our 4 km model suite is not really capable of resolving cloud streets, but we still needed to make our forecast roughly 1-2 hours before CI. So in a sense we were not making a forecast so much as a longer, more uncertain nowcast (probably not abnormal given the inherent unpredictability of warm-season convection). Most people put the first storm in KS and would end up being quite accurate in placement. Some of us went ahead of the dryline in west-central OK and were also correct.

There was one more episode in southern OK and then another in TX later on. This case will require some careful analysis to verify the forecast beyond subjective assessments. Today we got to see some of the potential objective methods via the DTC, which showed MODE plots of this case. The object identification of reflectivity via a neighborhood approach, along with the merging and matching of objects, was quite interesting and should foster vigorous discussion.
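
For those unfamiliar with the object-based approach, the basic recipe is to smooth and threshold the reflectivity field, label contiguous regions as objects, and then match forecast and observed objects by some similarity criterion (MODE's actual merging and matching is considerably more sophisticated). The sketch below is only a simplified, centroid-distance stand-in for that idea, not MODE itself; the threshold, smoothing radius, matching distance, and synthetic fields are arbitrary.

```python
# Simplified sketch of object identification and matching on reflectivity fields.
# Not MODE itself; threshold, smoothing, matching distance, and test fields are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter, label, center_of_mass

def find_objects(refl, threshold=35.0, smooth_sigma=2.0):
    """Smooth, threshold, and label contiguous reflectivity objects."""
    smoothed = gaussian_filter(refl, sigma=smooth_sigma)
    labeled, nobj = label(smoothed >= threshold)
    centroids = center_of_mass(smoothed, labeled, range(1, nobj + 1)) if nobj else []
    return labeled, centroids

def match_objects(fcst_centroids, obs_centroids, max_dist=20.0):
    """Pair forecast and observed objects whose centroids are within max_dist grid points."""
    matches = []
    for i, fc in enumerate(fcst_centroids):
        for j, oc in enumerate(obs_centroids):
            if np.hypot(fc[0] - oc[0], fc[1] - oc[1]) <= max_dist:
                matches.append((i, j))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fcst = rng.uniform(0, 55, (150, 150))      # stand-in forecast reflectivity (dBZ)
    obs = np.roll(fcst, (4, -6), axis=(0, 1))  # "observed" field, slightly displaced
    _, fc = find_objects(fcst)
    _, oc = find_objects(obs)
    print(len(fc), "forecast objects,", len(oc), "observed objects,",
          len(match_objects(fc, oc)), "matches")
```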

Last but not least, the number of models we interrogated continued to increase, yet we felt confident in our understanding of this wide variety of models, using all of the visualization tools, including the more rapid web-based plots and the sub-hourly convectively active fields. We are getting quite good at distilling information from this very large dataset. There are so many opportunities for quantifying model skill that we will be busy for a long time.

It was interesting to be under the threat of tornadoes and in the forecast path of them. It was quite a day, especially since the remnant of the hook echo moved over Norman, showering the area with debris picked up by the Goldsby tornado. The NWC was roughly 3-5 miles from the dissipation point of that tornado.

Sunday, May 22, 2011

Quick Post

I have blogged here about the scales of CI, and this weekend provided a great example.
Saturday:
These storms formed in close proximity to the dryline. The southernmost supercell went up pretty quickly, while the others to the north and west went up much more slowly and remained small; only the storm closest to the supercell eventually grew into a supercell itself. But the contrast is obvious. Even after breaking the cap, the storms remained small for an hour or so, and a few remained small for two.

Today, we saw turkey towers along the dryline in OK for quite a while (two hours or so), and then everything went up. But it is interesting to see the different scales, even at the "cloud scale," where things tend to be uneven and random, skinny and wide, slow and fast. It makes you wonder what the atmospheric structure is, especially when our tools tell us the atmosphere is uncapped but the storms just don't explode.

Looks like a pretty active southern Plains week is just beginning, as evidenced by the 43 tornado reports today and the 20 yesterday.