Wednesday, June 08, 2016

SFE 2016 Wrap Up

Well, last week concluded SFE 2016. This season was a particularly interesting one. While we always deal with some marginal cases and mesoscale forcing as the mechanism for severe convection, this year seemed to feature many of those cases. Many days throughout the experiment were conceptually difficult to forecast, even high-end days such as 26 May. While the full period forecasts were easier, breaking the full period into specific four-hour chunks proved challenging, since those forecasts required predicting both the initiation (or intensification, if convection was ongoing) of severe storms and the motion and evolution of those storms (i.e., would supercells form and merge into an MCS? Would morning convection reintensify?). Each of those elements is a forecast challenge on its own, but we combined them into one.

In a way, it's ideal that we faced so many of these environments. We've seen in past SFEs that when the CAMs are strongly forced, they often do quite well at pinpointing the location and intensity of severe convection. Where do they have the most difficulty? Under weaker forcing, when remnant outflow boundaries and mesoscale details have a large influence on the day's convection. Having a 65-member CAM ensemble in the CLUE operating during these environments may give us unparalleled insight into which CAM ensemble design characteristics perform best under uncertain circumstances, and can augment the deterministic guidance that is already operational. While we may have come into most days looking at only a small area where CAPE, shear, and a lifting mechanism were present, this set of days will provide us with many case studies of realistic, less-than-ideal circumstances.

As always, a huge thanks goes out to our participants, who hailed from multiple countries and states. We gathered a number of subjective impressions from these participants on various subsets of the CLUE, illustrating forecaster and researcher insights about how these CAMs may best be applied. In the case of the isochrones, this year's comments will help design a better, more user-friendly product and introduction to the concept for next year.

Two great challenges lie ahead: the verification and analysis of the massive amount of data generated and collected during SFE 2016, and the planning of SFE 2017. Such is the cycle of an annual experiment - the work is never done. Onward!

Wednesday, June 01, 2016

Chopping the FAR

Afternoons in the SFE are composed of three main parts: a Day 2 forecast, evaluations of various aspects of the CAMs, and updates to the morning forecasts. Sometimes, very little new information contributes to these updates, particularly if convection has not initiated by the time of the update. On other days, convective initiation or intensification has occurred, and we have a much better concept of how the convection will evolve. Yesterday was an excellent example of how the afternoon updates can improve upon the morning forecasts once we get a sense of the evolution.
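Since this post's title plays on the false alarm ratio (FAR), here's a minimal sketch of how that metric is computed from hit and false-alarm counts. The function name and example numbers are illustrative only, not taken from the SFE's actual verification system:

```python
def false_alarm_ratio(hits, false_alarms):
    """FAR = false alarms / (hits + false alarms).

    A perfect forecast scores 0.0; a forecast with no hits scores 1.0.
    """
    total = hits + false_alarms
    return false_alarms / total if total else 0.0

# Trimming unneeded area from an outlook in the afternoon update can
# remove false alarms without sacrificing hits, "chopping" the FAR:
morning = false_alarm_ratio(hits=30, false_alarms=30)    # 0.5
afternoon = false_alarm_ratio(hits=30, false_alarms=10)  # 0.25
```

The point of the afternoon update, in these terms, is that new observations let forecasters shrink the false-alarm area while keeping the hits.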

Tuesday, May 31, 2016

Data Driven

Well, we have arrived at the fifth and final week of SFE 2016. By this point in the experiment, the facilitators are mostly used to the rhythm of the testbed, knowing what observational data and model guidance we'll go over each day. By the end of the week, participants are generally used to the fast pace of the experiment as well. However, the first day of each week provides some reminders as to how much we're throwing at the participants. I thought that tonight, I'd provide a brief rundown of what we consider when making our full period outlooks each day, which run from 16Z of any given day to 12Z the following day.

Thursday, May 26, 2016

CLUE Comparisons

Each day, an evaluation takes place on the total severe desk to compare three subensembles of the CLUE. One subensemble contains 10 ARW members, one contains 10 NMMB members, and one contains 5 ARW and 5 NMMB members. Participants evaluate these subsets of the CLUE using two fields: the probability of reflectivity greater than 40 dBZ, and updraft helicity. This week, all of the ensembles have had trouble grasping the complex convective scenario, as has most of the convection-allowing guidance. However, today's comparison highlighted the challenges of these evaluations: each model had different strengths at varying time periods throughout the forecast, but participants had to provide one summary rating for the entire run.
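As a rough illustration of the first field, a member-count probability of reflectivity exceeding 40 dBZ is just the fraction of members exceeding the threshold at each grid point. This is a minimal sketch assuming NumPy; the actual CLUE post-processing (e.g., any neighborhood smoothing) isn't described in this post:

```python
import numpy as np

def exceedance_probability(member_fields, threshold=40.0):
    """Fraction of ensemble members exceeding `threshold` at each grid point.

    member_fields: array of shape (n_members, ny, nx), e.g. simulated
    reflectivity in dBZ from each subensemble member.
    Returns probabilities in [0, 1] with shape (ny, nx).
    """
    fields = np.asarray(member_fields)
    return (fields > threshold).mean(axis=0)

# Toy 10-member, single-point example: 4 of 10 members exceed 40 dBZ
members = np.array([10, 25, 41, 52, 38, 60, 45, 12, 30, 39], float)
prob = exceedance_probability(members.reshape(10, 1, 1))  # 0.4 at the point
```

A probability field like this summarizes the subensemble at a glance, which is part of why a single summary rating for a whole run can feel reductive when member skill varies with forecast hour.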

Tuesday, May 24, 2016

Model Solutions Galore

This week we've been experiencing broader risk areas in the Spring Forecasting Experiment than in previous weeks. Instability has recovered across much of the Great Plains, and persistent southwesterly flow at upper levels, due to a trough in the west, has spread steep lapse rates over a wide area. While the trough is still somewhat too far west for the strongest flow to coincide with a broad area of instability, deep-layer shear has been sufficient for severe storms to occur somewhere each day. Determining the exact location is a daily difficulty for our SFE participants.

When we have large areas to consider in conjunction with the huge amount of NWP data we have from the CLUE, deterministic CAMs, and operational large-scale guidance, the number of different scenarios can be overwhelming. Particularly this week, there have been multiple solutions for how the day's weather could evolve, according to the NWP. Mesoscale details from prior convection have also played a large role both today and yesterday in making our forecasts, and the variation in the CAM guidance reflects the reliance on those small-scale details. One member's outflow boundary is likely not in the same place as another's, and it's up to participants to determine which solution is most likely to verify. As an example of the different solutions we saw yesterday, here's a snapshot of five ensemble members whose configurations differ only in the microphysics scheme they use. Observations are in the lower right-hand panel:

Monday, May 23, 2016

Back to the "Basics"

Each day in the Spring Forecasting Experiment, before we consider any numerical weather prediction, participants hand analyze surface and upper air maps. While hand analysis is less common in the digital age, it's an important aspect of our daily routine. Why, you might ask? Well, what do you do when two model runs give you something like this:

Thursday, May 19, 2016

Verification (with Low Population)

The target area yesterday was very small, hugging the U.S.-Mexico border from the Big Bend region of Texas northward and westward into New Mexico. This area is sparsely populated, which becomes an issue when trying to verify forecasts of severe weather. The United States is far from the only country with this problem - one participant gave a talk this week noting that the area with the most severe weather in South America is also sparsely populated. When we're forecasting for a sparsely populated area in the experiment, we have to examine metrics other than Local Storm Reports (LSRs), particularly because the verification of yesterday's forecasts is the first activity each morning.