Thursday, May 26, 2016

Each day, an evaluation takes place at the total severe desk to compare three subensembles of the CLUE. One subensemble contains 10 ARW members, one contains 10 NMMB members, and one contains 5 ARW and 5 NMMB members. Participants evaluate these subsets of the CLUE using two fields: the probability of reflectivity greater than 40 dBZ and updraft helicity. This week, all of the ensembles have struggled to capture the complex convective scenario, as has most of the convection-allowing guidance. Today's comparison, however, highlighted a challenge inherent to these evaluations: each model had different strengths at different times during the forecast, yet participants had to provide a single summary rating for the entire run.
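For concreteness, here is a minimal sketch of how those two evaluation fields could be derived from gridded subensemble output: the fraction of members exceeding 40 dBZ at each grid point, and the ensemble-maximum updraft helicity. The array names, grid shapes, and random demo data are illustrative assumptions, not the CLUE's actual file conventions or the SFE's evaluation code.

```python
import numpy as np

def ensemble_exceedance_probability(refl, threshold=40.0):
    """Fraction of members exceeding `threshold` (dBZ) at each grid point."""
    # refl: (n_members, ny, nx) reflectivity in dBZ
    return (refl > threshold).mean(axis=0)

def ensemble_max_updraft_helicity(uh):
    """Ensemble-maximum updraft helicity at each grid point."""
    # uh: (n_members, ny, nx) updraft helicity in m^2/s^2
    return uh.max(axis=0)

if __name__ == "__main__":
    # Placeholder random fields standing in for a 10-member subensemble.
    rng = np.random.default_rng(0)
    refl = rng.uniform(0.0, 60.0, size=(10, 120, 160))
    uh = rng.uniform(0.0, 150.0, size=(10, 120, 160))

    prob_40dbz = ensemble_exceedance_probability(refl)   # values in [0, 1]
    uh_max = ensemble_max_updraft_helicity(uh)
    print(prob_40dbz.shape, prob_40dbz.max(), uh_max.max())
```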
Tuesday, May 24, 2016
Model Solutions Galore
This week we've been experiencing broader risk areas in the Spring Forecasting Experiment than in previous weeks. Instability has recovered across much of the Great Plains, and persistent southwesterly flow at upper levels, due to a trough in the west, has spread steep lapse rates over a wide area. While the trough is still somewhat too far west for the strongest flow to coincide with the broad area of instability, deep-layer shear has been sufficient for severe storms to occur somewhere each day. Determining exactly where is a daily challenge for our SFE participants.
When we have large areas to consider, in conjunction with the huge amount of NWP data we have from the CLUE, the deterministic CAMs, and the operational large-scale guidance, the number of different scenarios can be overwhelming. This week in particular, the NWP has offered multiple solutions for how each day's weather could evolve. Mesoscale details left behind by prior convection have also played a large role in our forecasts both today and yesterday, and the variation in the CAM guidance reflects the reliance on those small-scale details. One member's outflow boundary is likely not in the same place as another's, and it's up to participants to determine which solution they think will verify. As an example of the different solutions we saw yesterday, here's a snapshot of five ensemble members whose configurations differ only in the microphysics scheme they use. Observations are in the lower right-hand panel:
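For readers curious how such member-to-member differences could be scored objectively rather than by eye, below is a minimal sketch of one common approach: comparing each member's reflectivity field to observations with a neighborhood fractions skill score. The member names, grid sizes, 40 dBZ threshold, and neighborhood width are illustrative assumptions, not the experiment's actual verification setup.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold=40.0, neighborhood=9):
    """Neighborhood fractions skill score for 2-D dBZ grids at a given threshold."""
    # Convert each field to a binary exceedance grid, then to neighborhood fractions.
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=neighborhood)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=neighborhood)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

if __name__ == "__main__":
    # Placeholder random fields standing in for observed reflectivity and
    # five members that differ only in microphysics (names are hypothetical).
    rng = np.random.default_rng(1)
    obs = rng.uniform(0.0, 55.0, size=(120, 160))
    members = {f"mp_scheme_{k}": rng.uniform(0.0, 55.0, size=(120, 160)) for k in range(5)}

    scores = {name: fractions_skill_score(fcst, obs) for name, fcst in members.items()}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: FSS = {score:.3f}")
```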
Monday, May 23, 2016
Back to the "Basics"
Each day in the Spring Forecasting Experiment, before we consider any numerical weather prediction, participants hand-analyze surface and upper-air maps. While hand analysis is less common in the digital age, it's an important aspect of our daily routine. Why, you might ask? Well, what do you do when two model runs give you something like this: