Tuesday, June 12, 2012

Leveraging the known

I left off the last post indicating that we know model biases, at least for the models that are coarser in grid spacing. But that is a lie.

Thursday, June 07, 2012

Metrics: Pick your number

How exactly do you choose your favorite model?

Because that is the model you fall back on when uncertainty is large. The model you use when a big event is forecast. The model you are most familiar with. The model you use to re-calibrate yourself.

Saturday, May 26, 2012

Verification

One of the many struggles with forecasting is verification, especially of "rare" events. In the severe storms world, we have storm reports, and it has been shown that this database has serious flaws over time. Reports are conditional upon a storm hitting something or someone, so they follow population density and highways. Many inaccessible areas may have had things like hail or high wind, yet there are no observations nearby. In the Plains this is a big challenge.

Over the course of the HWT EFP, we have debated the so-called practically perfect methodology. At its core it is a Kernel Density Estimation (KDE) technique that maps individual storm reports to a grid using a radius of influence (40 km) and then applies a Gaussian smoother (120 km) to produce probabilities of severe weather.
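The idea can be sketched in a few lines. This is a toy illustration, not the operational code: reports are binned into 40-km grid boxes, and a 2-D Gaussian kernel with a 120-km length scale spreads each occupied box into a probability field. The function name and report format here are invented for the example.

```python
import numpy as np

def practically_perfect(report_boxes, grid_shape, dx_km=40.0, sigma_km=120.0):
    """Toy 'practically perfect' probabilities from storm-report grid boxes.

    report_boxes : (i, j) indices of grid boxes containing >= 1 report
    dx_km        : grid spacing; sigma_km : Gaussian smoothing length
    """
    hits = np.zeros(grid_shape)
    for i, j in report_boxes:
        hits[i, j] = 1.0  # a box counts once, no matter how many reports

    # Gaussian kernel with sigma expressed in grid points (120/40 = 3)
    s = sigma_km / dx_km
    r = int(4 * s)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kern = np.exp(-(x**2 + y**2) / (2 * s**2))
    kern /= kern.sum()

    # Spread each occupied box outward; clip where kernels overlap
    out = np.zeros(grid_shape)
    ny, nx = grid_shape
    for i, j in zip(*np.nonzero(hits)):
        i0, i1 = max(i - r, 0), min(i + r + 1, ny)
        j0, j1 = max(j - r, 0), min(j + r + 1, nx)
        out[i0:i1, j0:j1] += kern[i0 - i + r:i1 - i + r,
                                  j0 - j + r:j1 - j + r]
    return np.minimum(out, 1.0)

# two nearby reports on a 50x50 grid of 40-km boxes
probs = practically_perfect([(25, 25), (26, 27)], (50, 50))
```

The probabilities peak between the two reports and decay smoothly outward, which is what makes this field useful for verifying probabilistic forecasts against spotty point reports.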

Given that storm reports are not the most ideal, independent, unbiased dataset out there, we look to other, less biased data. So what are the alternatives? Can we use severe storm warnings? How about data specifically from the radars, like Maximum Estimated Hail Size or Rotation tracks? How about satellite-derived data about vegetation?

Just about all of these data sets have their own problems. For the radars, we are observing rotation or hail aloft, not necessarily at the ground. This is still valuable information. But how do we switch from spotty storm reports to continuous tracks? Will the same KDE smoothing approaches be necessary?

For the warnings, it is clear that meteorology alone is not driving them. If there is a chance a storm could be severe over a highly populated area at a critical time, the edge goes to issuing a warning rather than not. That is not all bad, since we would all like to err on the side of safety.

Using radar data, we still have to verify that what the radar detects is actually occurring at the ground, and that the phenomenon is as strong or as large as indicated aloft. That requires doing verification on the observations themselves. The SHAVE folks at NSSL-OU are trying to do exactly that, as are some other NWS-associated folks, though at the moment their name escapes me.

Satellite data also offer some advantages for tracking severe storms, provided there is damage, say from large hail stripping vegetation bare or from tornadoes. Collecting such fine-resolution data is going to take a dedicated effort, but in the end it helps build more complete knowledge about storms, more understanding of the successes and failures of the forecasts, and quite possibly better forecasts.

Friday, May 25, 2012

Thanks to TAMU for soundings

One of the observation components of this year's EFP has been an intercomparison between the Vaisala RS92 and InterMet radiosondes to help verify the Microwave Radiometer that we (Dave Turner at NSSL) have on the roof of the National Weather Center. We were lucky to have Don Conley from Texas A&M bring his observations class on the road and visit the HWT to conduct some local and mobile radiosonde intercomparisons. They made two mobile deployments, one in Concordia, KS on Wednesday and one in Altus, OK on Friday, to help verify the models we are using for convection initiation.

They drove from Norman to Concordia and were able to make 3 launches (4 really), and 2 trips to Walmart (for helium, and then to return said helium). Many thanks to the City of Concordia and the airport manager for allowing them to use a hangar for these balloon launches in very strong winds (which caused the failure of the very first balloon launch). They got to a great spot just east of 2 very long and robust Horizontal Convective Rolls, both of which produced CI along the front-HCR intersection.

I haven't heard the stories from today, but I do know they got to Altus after lunch at Meers (for the Meers burger, obviously) and got off two launches again in an environment characterized by HCRs. These are great tests of the instruments, great experiences for the students, and excellent learning opportunities for the rest of us.

They (and you) should know that these soundings make their way to the SPC (something that is usually done upon request at TAMU) and prove valuable. These types of partnerships, sometimes ad hoc but almost always mutually beneficial, are what make the HWT a vibrant place for forecasting, research, research forecasting, and forecasting research; and now with observations!

You can find the mobile and local soundings here.

Again, thanks Don, Mike D., Mike C., and the whole Observations Class (send me your names and we can make you famous* by writing them on here!).

*Fame not guaranteed.

Thursday, May 17, 2012

Severe winds

Well, today the severe weather forecast area stretched from eastern CO into western KS. Although the hi-res models were producing some intense precipitation cores, it was highly unlikely the atmosphere would follow suit. The deep, dry boundary layer would be capable of supporting some serious downdrafts, but without a lot of moisture, wet downbursts were unlikely. However, dry thunderstorms were a possible convective mode, and all available evidence hinted at a line that would fizzle later in the evening.

Severe reports from KS showed up just prior to 0000 UTC, right along with an OK Mesonet wind gust to 58 mph. Now, normally when you get a downdraft, it brings cooler air to the surface. In the case of a deep and dry boundary layer, there can be little in the way of temperature change. This is exactly what occurred, with the temperature actually rising 4°F during the big wind gust!
I went out on a limb and also proceeded to forecast a 20 percent chance of heat bursts. A few models hinted at that possibility tonight (in the next few hours, anyway). Verification in the morning!
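The physics behind that warm gust is worth a quick sketch: potential temperature is conserved during dry-adiabatic descent, so air sinking through a deep, well-mixed boundary layer arrives at the surface no cooler than what is already there, and air brought down from just above the mixed layer can arrive warmer. The numbers below are illustrative, not from this case.

```python
# Poisson's equation: T = theta * (p / p0)^(R/cp), with theta conserved
# during dry-adiabatic descent.
R_OVER_CP = 287.04 / 1004.0  # dry-air gas constant / specific heat

def descent_temp_c(theta_k, p_hpa, p0_hpa=1000.0):
    """Temperature (deg C) of a parcel with potential temperature theta_k
    after dry-adiabatic descent to pressure p_hpa."""
    return theta_k * (p_hpa / p0_hpa) ** R_OVER_CP - 273.15

# A parcel with theta = 315 K (typical of a hot, deep mixed layer)
# brought down to a 950-hPa surface arrives at roughly 37 C
t_sfc = descent_temp_c(315.0, 950.0)
```

If the whole boundary layer shares that theta, the downdraft air matches the ambient surface temperature; any extra theta mixed down from aloft shows up as a heat burst.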

Moisture bias?

So after much discussion today about moisture, evidence of a discrepancy is accumulating. Here is a comparison between the MWR on the roof (1.6 cm at 00 UTC 5/17 and 2 cm at 00 UTC 5/18), the IPW from the GPS system (gpsmet.noaa.gov), and the radiosonde launch from Norman.
GPS:
Here is the sounding:
Note the PW is 0.5" with a characteristic decrease immediately off the surface. Just after this time, the dew point at the NWC increased from 53 to 55°F in the hour after release.

Below is the MWR PW time series with the last vertical dotted line at the right being 0000 UTC on 5/18 (the beginning of the chart is 5/17 at 00 UTC):

The MWR agrees well with the GPS system and the MWR is fresh off a calibration. I have no idea what would cause this discrepancy with the radiosonde data but clearly something is amiss.
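For reference, all three instruments report the same quantity in different units (0.5 in = 1.27 cm = 12.7 mm), and from a sounding, precipitable water is just the pressure integral of specific humidity. A minimal sketch, with made-up profile values chosen only to land near the 0.5" range discussed above, not the actual Norman sounding:

```python
import numpy as np

G = 9.81        # gravity, m s^-2
RHO_W = 1000.0  # density of liquid water, kg m^-3

def precipitable_water_mm(p_hpa, q_gkg):
    """PW = (1 / (g * rho_w)) * integral of q dp, returned in mm.
    Levels ordered surface-first (decreasing pressure)."""
    p_pa = np.asarray(p_hpa, dtype=float) * 100.0
    q = np.asarray(q_gkg, dtype=float) / 1000.0   # g/kg -> kg/kg
    # dp is negative going up, hence the leading minus sign
    return -np.trapz(q, p_pa) / (G * RHO_W) * 1000.0

# illustrative dry profile: pressure (hPa) and specific humidity (g/kg)
pw = precipitable_water_mm([1000, 850, 700, 500, 300],
                           [4.0, 3.0, 2.0, 1.0, 0.25])
# roughly 13.5 mm, i.e. about 0.53 inches
```

Running the unit conversions both ways like this is a quick sanity check when comparing the MWR (cm), GPS IPW, and sonde-derived (inches) numbers.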

Let me back up to DVN on the 16th at 0000 UTC for a counterexample:
And now the corresponding GPS data from Rock Island:
Note the close correspondence over a few balloon launches, specifically at 5/16 00 UTC! Yet that decrease just above the surface looks suspect. It is possible the cloud layer above 700 mb is playing a role, but I don't think we can diagnose that from what we have.

Where did all the moisture go?

The models stole it all!

One of the additions this year is an observational program (brought to you by NSSL) in which we have a Microwave Radiometer (MWR) offering vertical profiles of moisture and temperature over the lowest 4 km AND a new radiosonde intercomparison to go along with it (a Vaisala RS92 sonde and a new IMET sonde). The goal is to use the soundings to compare against the MWR and thus offer a calibration data set, but also to see how well the moisture retrievals compare to actual observations in clear-sky conditions.

Since we have had two sonde launches so far this week, we can see how well the IMET and Vaisala measurements compare, and also how the MWR compares to both of them. So far, the two sondes are very close. The comparison with the MWR was at first horrible, until the MWR was re-calibrated. Sensor drift in one of the eight channels was to blame, at least for the low-level structure of the moisture (most noticeable in the RH field).

For the next point, we have to understand that the majority of the information content in the moisture channels is contained below 4 km and only amounts to about 1.6 pieces of information (on average). This means that within the lowest 4 km we have at most 2 effective measurement points for moisture. So the vertical profiles tend to be smooth, more like an average, which is why agreement between the MWR and moist layers aloft is essentially low.
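To see why roughly two pieces of information smooth out elevated moist layers, here is a toy illustration: a "true" profile with a thin moist layer at 2.5 km, viewed through a single broad weighting function. The weighting function is an invented stand-in, not the radiometer's actual averaging kernel.

```python
import numpy as np

z = np.linspace(0.0, 4.0, 81)  # height (km), the MWR's useful moisture depth

# "True" mixing ratio (g/kg): moist near the surface plus a thin
# elevated moist layer centered at 2.5 km
q_true = 8.0 * np.exp(-z / 2.0) + 3.0 * np.exp(-((z - 2.5) / 0.15) ** 2)

# A broad weighting function centered near the layer: a stand-in for
# one of the ~2 effective measurement points below 4 km
w = np.exp(-((z - 2.5) / 1.5) ** 2)
w /= w.sum()

layer_peak = q_true[np.abs(z - 2.5).argmin()]  # what a sonde would see
retrieved = float(np.dot(w, q_true))           # what a smooth retrieval sees
```

The broad kernel averages the thin layer with its drier surroundings, so the retrieval underestimates it: exactly the "smooth, more like an average" behavior described above.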

OK has been pretty low on moisture the last couple of weeks. We spent some time forecasting in the Iowa area the other day, where model precipitable water was in excess of 1" but in reality was only around 0.5". This posed no problem for the models, which generated storms along a cold front associated with an upper low. Some warnings resulted from storms arcing through IA that evening, with a few wind and hail reports, but generally good CI and SVR forecasts.

At issue was this anomalous model moisture and if/how it would play a role in the forecast. So many models had storms, though, that it was hard to discount storms even in this lower-moisture environment. Even looking back at the verifying soundings from DVN, ILX, and DTX, it was evident that the CAPS ensemble control member was 2-3 g/kg too moist through the depth of the boundary layer. How can we have errors that large and yet have some skill in the convective forecasts? The models' simulated storms were a bit on the high side in terms of reflectivity, lasted a bit longer, and were a bit larger, but still had a similar enough evolution. I guess you could consider this a good thing, but it should really drive home the point that better, denser moisture observations are needed.

We really need to see WHY these errors occur and diagnose what is contributing to them. In this case, the control member was an outlier in terms of the overall ensemble. Why? Was it the initial conditions, lateral boundary conditions, the perturbations applied to the ensemble members, or some combination thereof that set the stage for these differences in convection? Or was it the interplay between the various model physics and all previous factors? We will need to dig deep on this case to get any kind of reasonable, well constrained answer.

In order to address these issues, at least partially, we need observations of moisture within the PBL. In fact, we could even benefit just from knowing the boundary layer depth; it is at least plausible to retrieve that field and then derive the PBL moisture. Such is the goal of the MWR type of profiler: to derive the lowest-layer moisture structure, addressing at least some of our issues. Regardless, these high-resolution models REQUIRE observations to verify both their processes and their statistics in order to make improvements.