Friday, May 12, 2017

CAM Guidance in a Mixed-Mode Case

Yesterday, 11 May 2017, gave the participants in SFE 2017 many things to consider. A potent upper-level low pressure system was finally moving eastward, after sitting over the Southwest and giving the Experiment interesting weather to forecast all week. As the experiment began, ongoing elevated convection was already producing reports over northeastern Oklahoma, and the participants were eyeing the chance for some severe weather locally.


By 2000 UTC (3PM CDT, near the end of the SFE's daily activities), cellular convection was initiating all across northern Oklahoma, northern and western Arkansas, and northeast Texas. Many of these storms quickly began to rotate.


Wednesday, May 10, 2017

The Denver Hailstorm, 8 May 2017

If you have an interest in severe and unusual weather, you probably already know all about the hailstorm that struck Denver on Monday afternoon, shattering windows and damaging vehicles and roofs across the metro. Indeed, it made for quite the exciting Monday in the Spring Forecasting Experiment.

During the morning forecast discussion, participants noted that good forcing was present over Colorado, along with dewpoints considered sufficient for severe convection by Colorado standards (in the 50s). The moisture was modified Gulf moisture, arriving in CO by way of the Rio Grande thanks to the surface front that was the focus of most of last week's severe convection. Also noted was the unidirectional shear, as can be seen on this 1200 UTC (7:00 AM CDT) hodograph from Albuquerque, which was upstream of Denver at 250 mb and 500 mb.

Sunday, May 07, 2017

Verification Determination

Verification is a huge part of the Spring Forecasting Experiment. Each day, we make multiple forecasts on different time scales (this year ranging from daylong outlooks to hourly probabilistic forecasts), and the first activity participants undertake on Tuesday-Friday is an evaluation of the previous day's forecasts. Additionally, in the afternoon, participants evaluate numerical guidance, by comparing model output to observations.

Selecting how to use observations for verifying some of the more nebulous aspects of severe convective weather is one of the challenges of designing the SFE. With some fields, it is easy enough to compare the simulated with the observed - take reflectivity, for example:
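To make the flavor of such a comparison concrete, here is a minimal sketch (not the SFE's actual verification code; the function name and grids are invented for illustration) of a neighborhood-based comparison of forecast and observed reflectivity exceedance, in the spirit of the fractions skill score often used for convection-allowing models:

```python
import numpy as np

def fractions_skill_score(fcst_dbz, obs_dbz, threshold=40.0, window=3):
    """Compare forecast and observed reflectivity exceedance over neighborhoods.

    Rather than demanding a strict point-by-point match, each grid point is
    scored on the fraction of points exceeding `threshold` within a
    window x window neighborhood, so a storm displaced by a few grid points
    still earns credit.
    """
    def neighborhood_fraction(field):
        binary = (field >= threshold).astype(float)
        pad = window // 2
        padded = np.pad(binary, pad, mode="constant")
        frac = np.empty_like(binary)
        for i in range(binary.shape[0]):
            for j in range(binary.shape[1]):
                frac[i, j] = padded[i:i + window, j:j + window].mean()
        return frac

    pf = neighborhood_fraction(np.asarray(fcst_dbz, dtype=float))
    po = neighborhood_fraction(np.asarray(obs_dbz, dtype=float))
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 if mse_ref == 0 else 1.0 - mse / mse_ref

# A simulated storm core verifies perfectly against itself...
field = np.zeros((20, 20))
field[8:12, 8:12] = 55.0
print(fractions_skill_score(field, field))  # -> 1.0

# ...and still scores well when the "observed" storm is displaced slightly.
shifted = np.roll(field, 2, axis=1)
print(fractions_skill_score(field, shifted, window=5))
```

A point-by-point comparison would score the displaced storm as a complete miss plus a complete false alarm; the neighborhood approach is forgiving of small position errors, which matters for storm-scale guidance.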


Wednesday, May 03, 2017

Snow Forecasting Experiment??

Strange considerations can crop up in the SFE. In previous years we have forecasted in areas of low radar coverage such as the Mountain West, determined which side of the U.S.-Mexico border a storm would form on, and dealt with the severity of convection coming onshore from the Gulf. However, remnants of last weekend's storm threw a highly unusual wrinkle into the forecast....



Sunday, April 30, 2017

CLUEing in on Spring Forecasting Experiment 2017

It's nearly the beginning of May (even if it doesn't feel like it in Norman, OK, with a current windchill of 38°F!) and that means that another Spring Forecasting Experiment is about to be underway. This year the Community Leveraged Unified Ensemble (CLUE) is even larger than last year's, comprising 81 members from organizations such as NSSL, CAPS, OU, NOAA's Earth Systems Research Laboratory/Global Systems Division (ESRL/GSD), NCAR, and GFDL. These members will provide forecasts of 36 h to 60 h in length, depending on the subsets of the ensemble being considered.


Tuesday, February 07, 2017

The SFE at the American Meteorological Society's Annual Meeting

During the week before last, over 4500 meteorologists convened in Seattle, Washington for the 97th American Meteorological Society (AMS) annual meeting. As always, I left this meeting with a plethora of new ideas, enthusiasm for the field, and at least a dozen papers added to my to-read pile. However, I also noticed a number of talks which mentioned the Spring Forecasting Experiment, including results from past experiments and hints of what's to come in SFE 2017.

A view of Puget Sound from the Washington State Convention Center, home of the 2017 AMS Annual Meeting

Friday, December 02, 2016

A Late November Outbreak

Greetings from the off-season!

While SFE 2017 (!) is a ways off yet, preparations are already underway for many of the collaborators that provide products to the experiment. Development of the ensembles and guidance tested in the SFEs often occurs across a number of years, as tweaks suggested by prior experiments are implemented alongside new product development.

For example, in SFE 2015 four sets of tornado probabilities were evaluated. While all of the probabilities used 2-5 km updraft helicity (UH) from the NSSL-WRF ensemble, they differed in the environmental criteria used to filter the UH (i.e., if a simulated storm from a member was moving into an unfavorable environment, it was less likely to form a tornado and therefore the ensemble probabilities were lowered). These probabilities showed an overforecasting bias in the seasonally aggregated statistics, and the bias was consequential enough to be noted in subjective participant evaluations. The most typical rating for the probabilities was a 5 or 6 on a scale of 1-10, leaving much room for improvement.

To improve these tornado probabilities, a set of climatological tornado frequencies, given a right-moving supercell and a significant tornado parameter (STP) value, as calculated by forecasters at the SPC, was brought to bear on the problem. The application of the climatological frequencies grounded the probabilities in reality. For example, in the prior probabilities, if 6/10 ensemble members had a simulated storm passing over the same spot, the forecast probability would be 60%. The updated probabilities consider the magnitude of the STP in the environment the storm is moving into in each member. For example, if all of the storms were moving into an environment with an STP of 2.0, each member is assigned the climatological frequency with which a storm produces a tornado in that situation as the probability of a tornado. Then, the probabilities are averaged across all members. Assuming that 6/10 members now have the storm moving into an environment with STP = 2, the probability would be 60% times the climatological frequency of a tornado given STP = 2. This approach lowers the probabilities, and thus reduces overforecasting.
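To make the arithmetic concrete, here is a minimal sketch of the two approaches side by side. The climatological frequencies and helper names below are hypothetical, invented purely for illustration; the real frequencies come from the SPC climatology described above:

```python
import numpy as np

# Hypothetical climatological tornado frequencies given a right-moving
# supercell at a particular STP value (illustrative numbers only; the real
# values come from the SPC climatology).
CLIMO_FREQ_GIVEN_STP = {0: 0.03, 1: 0.08, 2: 0.15, 3: 0.22, 4: 0.30}

def uh_probability(member_has_storm):
    """Raw ensemble probability: the fraction of members with a simulated
    storm (UH exceedance) at a point."""
    return float(np.mean(member_has_storm))

def stp_calibrated_probability(member_has_storm, member_stp):
    """STP-calibrated probability: each member with a storm contributes the
    climatological tornado frequency for the STP of the environment its
    storm moves into; members without a storm contribute zero. The
    contributions are then averaged across all members."""
    contributions = [
        CLIMO_FREQ_GIVEN_STP[stp] if has_storm else 0.0
        for has_storm, stp in zip(member_has_storm, member_stp)
    ]
    return float(np.mean(contributions))

# 6 of 10 members have a storm at this grid point, all moving into
# environments with STP = 2.
has_storm = [True] * 6 + [False] * 4
stp = [2] * 6 + [0] * 4

print(uh_probability(has_storm))                   # raw probability: 0.6
print(stp_calibrated_probability(has_storm, stp))  # 0.6 x 0.15, i.e. ~0.09
```

With these illustrative numbers, the calibrated probability is roughly 9% instead of 60%, showing how the climatological weighting pulls the raw ensemble agreement down toward observed tornado frequencies.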

The new set of probabilities will be tested in SFE 2017. However, these probabilities have been worked on for over a year, and are already available daily on the NSSL-WRF ensemble's website.

While the statistics for all of the tornado probabilities discussed herein were aggregated over the peak of tornado season (i.e., April-June), the end of November 2016 brought tornadoes to the southeastern United States, and with them, the chance to test the new probabilities. We'll focus specifically on 29 November 2016, a day that saw 44 filtered tornado local storm reports (LSRs):


The Storm Prediction Center had a good handle on this scenario, showcasing the potential for severe weather across some of the affected region four days in advance. At 0600 UTC on the day of the event, their "enhanced" area covered much of the hardest-hit areas, with the axis of the outlook a bit skewed from the axis of the LSRs. The outlook and LSRs are shown below.


The 0600 UTC outlook is shown here, because that is when the probabilities computed above become available - our hope is that someday forecasters can look at these probabilities as a "first guess", distilling multiple severe storm parameters from the ensemble into one graphic. The SPC's probabilistic tornado forecast from 0600 UTC encompassed all of the tornado reports, but was a bit too far west initially. Ideally, the ensemble tornado forecasts would resemble the SPC's forecast:


When we consider the UH-based probabilities, there's a pocket of high probabilities, between 25-30%, in an area that is close to the highest density of tornado reports. However, not all of the reports are encompassed by the probabilities, and there is an extraneous blob of 5% risk over the DC/Maryland area. The 10% corridor of the probabilities extends further north than the SPC's, but overall, this was a decent forecast, if a bit high in that "bulls-eye" of probabilities.
Let's compare this to the STP-based probabilities:
These probabilities have a much lower magnitude, but still encompass most of the tornado reports within the 10% contour. The 2% contour is also extended westward into Louisiana, capturing the tornado report that the prior probabilities missed. Overall, this forecast is more like the SPC's outlook, and better reflects what happened on the 29th.

Will we see the same trends into the spring? Aggregated seasonal statistics from spring 2014-2015 seem to suggest yes. However, the opportunity to gather participant reflections and evaluations of these probabilities and this methodology still awaits - and I, for one, am excited to see what new insights our participants will bring.