Wednesday, June 01, 2016

Chopping the FAR

Afternoons in the SFE are composed of three main parts: a Day 2 forecast, evaluations of various aspects of the CAMs, and updates to the morning forecasts. Sometimes very little new information contributes to these updates, particularly if convection has not initiated by the time of the update. On other days, convective initiation or intensification has already occurred, and we have a much better sense of how the convection will evolve. Yesterday was an excellent example of how the afternoon updates can improve upon the morning forecasts once we get a handle on the evolution.
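
For anyone unfamiliar with the acronym in the title, FAR is the False Alarm Ratio: of everything we forecast, the fraction that failed to verify. Here's a minimal Python sketch (the numbers are invented, and this isn't SFE code) of how trimming an outlook once storms are underway can chop the FAR:

```python
# Minimal sketch (invented numbers, not SFE code): the False Alarm Ratio.
def false_alarm_ratio(hits, false_alarms):
    """FAR = false alarms / (hits + false alarms); 0 is perfect, 1 is all false alarms."""
    forecast_yes = hits + false_alarms
    if forecast_yes == 0:
        return float("nan")  # undefined when no events were forecast
    return false_alarms / forecast_yes

# Morning outlook covers a broad area: many points forecast, fewer verify.
print(false_alarm_ratio(hits=12, false_alarms=18))  # 0.60
# Afternoon update trims the area once convection is underway.
print(false_alarm_ratio(hits=12, false_alarms=6))   # ~0.33
```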

Tuesday, May 31, 2016

Data Driven

Well, we have arrived at the fifth and final week of SFE 2016. By this point in the experiment, the facilitators are mostly used to the rhythm of the testbed, knowing what observational data and model guidance we'll go over each day. By the end of each week, participants are generally used to the fast pace of the experiment as well. However, the first day of each week is a reminder of just how much we're throwing at the participants. I thought that tonight I'd provide a brief rundown of what we consider when making our full-period outlooks each day, which run from 16Z on a given day through 12Z the following day.

Thursday, May 26, 2016

CLUE Comparisons

Each day, an evaluation takes place on the total severe desk comparing three subensembles of the CLUE: one contains 10 ARW members, one contains 10 NMMB members, and one contains 5 ARW and 5 NMMB members. Participants evaluate these subsets of the CLUE using two fields: the probability of reflectivity greater than 40 dBZ and updraft helicity. This week, all of the subensembles have had trouble grasping the complex convective scenario, as has most of the convection-allowing guidance. However, today's comparison highlighted a challenge inherent to these evaluations: each subensemble had different strengths at different time periods throughout the forecast, yet participants had to provide one summary rating for the entire run.
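
To make the first of those fields concrete: a probability of reflectivity greater than 40 dBZ is, at heart, an ensemble exceedance probability, i.e., the fraction of members exceeding the threshold at each grid point. Here's an illustrative Python sketch (not the actual CLUE post-processing, which involves more than this):

```python
import numpy as np

# Illustrative sketch of an ensemble exceedance probability (not CLUE code).
def exceedance_probability(members, threshold=40.0):
    """members: (n_members, ny, nx) array of simulated reflectivity (dBZ).
    Returns the fraction of members exceeding `threshold` at each grid point."""
    return (np.asarray(members) > threshold).mean(axis=0)

# Toy example: a 10-member subensemble on a small grid.
rng = np.random.default_rng(42)
refl = rng.uniform(0.0, 60.0, size=(10, 3, 3))
prob = exceedance_probability(refl)  # each value is a multiple of 0.1
print(prob)
```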

Tuesday, May 24, 2016

Model Solutions Galore

This week we've been seeing broader risk areas in the Spring Forecasting Experiment than in previous weeks. Instability has recovered across much of the Great Plains, and persistent southwesterly flow at upper levels, driven by a trough in the West, has spread steep lapse rates over a wide area. While the trough is still somewhat too far west for the strongest flow to coincide with the broad area of instability, deep-layer shear has been sufficient for severe storms to occur somewhere each day. Determining exactly where is the daily challenge for our SFE participants.

When we have large areas to consider, in conjunction with the huge amount of NWP data from the CLUE, the deterministic CAMs, and the operational large-scale guidance, the number of different scenarios can be overwhelming. This week in particular, the NWP has offered multiple solutions for how each day's weather could evolve. Mesoscale details left behind by prior convection have also played a large role in our forecasts both today and yesterday, and the spread in the CAM guidance reflects how sensitive the solutions are to those small-scale details. One member's outflow boundary is likely not in the same place as another's, and it's up to participants to decide which solution they think will verify. As an example of the different solutions we saw yesterday, here's a snapshot of five ensemble members whose configurations differ only in their microphysics scheme. Observations are in the lower right-hand panel:

Monday, May 23, 2016

Back to the "Basics"

Each day in the Spring Forecasting Experiment, before we consider any numerical weather prediction, participants hand analyze surface and upper air maps. While hand analysis is less common in the digital age, it's an important aspect of our daily routine. Why, you might ask? Well, what do you do when two model runs give you something like this:


Thursday, May 19, 2016

Verification (with Low Population)

The target area yesterday was very small, hugging the U.S.-Mexico border from the Big Bend region of Texas northward and westward into New Mexico. This area is sparsely populated, which becomes an issue when trying to verify forecasts of severe weather. The United States is far from the only country with this problem - one participant gave a talk this week noting that the area with the most severe weather in South America is also sparsely populated. When we're forecasting for a sparsely populated area in the experiment, we have to examine metrics other than Local Storm Reports (LSRs), particularly because we verify the previous day's forecasts first thing each morning, before many reports from remote areas have had a chance to come in.
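
To see why report-based verification struggles where few people live, consider the simplest form of it: matching point reports against the forecast area. A toy Python sketch with invented coordinates and counts:

```python
# Toy illustration with invented data: counting LSRs inside a forecast box.
# In sparsely populated areas, storms can go unreported, so a good forecast
# may appear to be a false alarm under report-based verification.
forecast_area = {"lat": (29.0, 33.0), "lon": (-106.0, -102.0)}  # hypothetical box

def in_area(lat, lon, area):
    return (area["lat"][0] <= lat <= area["lat"][1]
            and area["lon"][0] <= lon <= area["lon"][1])

lsrs = [(31.2, -104.5), (30.1, -103.8)]  # only two reports reach the database
hits = sum(in_area(lat, lon, forecast_area) for lat, lon in lsrs)
print(hits)  # 2 -- radar may suggest far more severe storms than were reported
```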

Wednesday, May 18, 2016

High-Resolution Verification Challenges

Afternoons at the Spring Forecasting Experiment involve some forecasting activities, but much of the time is spent on evaluations. These differ between the two desks and from year to year, as we subjectively verify cutting-edge diagnostics and model updates. Specifics on this year's evaluations will be the feature of a future blog post. Inevitably, the evaluations prompt discussion of both the products being evaluated and the evaluation process itself, and even the facilitators don't always have easy answers to the questions participants raise. But that's part of the informal atmosphere (pun somewhat intended) that leads to better understanding between forecasters and researchers, and allows the two communities to brainstorm improvements. Today, such a question came up on the total severe desk: how do we verify high-resolution forecasts when we don't have high enough resolution in our observations?
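
One family of answers the community has developed is neighborhood verification, which compares event coverage over areas rather than point by point; the Fractions Skill Score (FSS) of Roberts and Lean (2008) is a common example. Here's a minimal illustrative sketch in Python, not the verification code we actually run:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Minimal Fractions Skill Score (Roberts and Lean 2008) sketch; illustrative
# only, not the verification code used in the experiment.
def fss(forecast, observed, threshold, window):
    """Compare fractional event coverage within (window x window) neighborhoods."""
    f = uniform_filter((forecast >= threshold).astype(float), size=window)
    o = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else float("nan")

# Toy example: fields that disagree point by point but agree over neighborhoods.
fcst = np.zeros((20, 20)); fcst[5:10, 5:10] = 50.0  # forecast storm, shifted
obs = np.zeros((20, 20)); obs[6:11, 6:11] = 50.0    # "observed" storm
print(fss(fcst, obs, threshold=40.0, window=1))  # strict point match: lower score
print(fss(fcst, obs, threshold=40.0, window=5))  # neighborhood match: higher score
```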