“When do Weather Forecast Models Misbehave?”

by Brooke Hagenhoff

Meteorologists use weather models to predict events such as thunderstorms. These models can be thought of as large, three-dimensional grids on which equations for temperature, pressure, humidity, and other variables are solved. In the past, the spacing between grid points was too coarse to simulate individual thunderstorms. However, computers have become fast enough to run these models with smaller grid spacing, allowing individual thunderstorms, or clusters of them, to be simulated. This allows for more detailed output, such as precipitation. Unfortunately, these models are not perfect: even the high-resolution output has errors, and it is unknown how these errors are related to prevailing weather patterns.

As part of our Center for Regional Climate Science research, I am working to understand these errors. To narrow the focus, days with precipitation were selected. In meteorological research, these days are called “cases.” An advanced classification technique called a Self-Organizing Map (or SOM) then matches each case to a category based on its atmospheric pattern, using fields such as surface pressure, moisture, and upper-atmosphere winds. The errors that the model makes when predicting the amount of precipitation for a case can then be investigated based on the atmospheric category, or classification, for that case.
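The matching step can be illustrated with a toy sketch. A trained SOM is a grid of “prototype” patterns, and a case is assigned to the node whose prototype it most resembles (its best-matching unit). The grid size, feature vectors, and data below are hypothetical placeholders, not the actual fields or SOM configuration used in this research:

```python
import numpy as np

# Hypothetical 3x4 SOM: each node holds a prototype atmospheric pattern,
# flattened into a feature vector (e.g. pressure and moisture values).
rng = np.random.default_rng(0)
n_rows, n_cols, n_features = 3, 4, 10
prototypes = rng.normal(size=(n_rows, n_cols, n_features))

def best_matching_unit(case, prototypes):
    """Return the (row, col) of the SOM node closest to this case."""
    # Euclidean distance from the case to every node's prototype
    dists = np.linalg.norm(prototypes - case, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Classify one hypothetical case (a day with precipitation)
case = rng.normal(size=n_features)
row, col = best_matching_unit(case, prototypes)
print(f"Case assigned to SOM node (row {row + 1}, column {col + 1})")
```

In a real analysis the prototypes would come from training the SOM on many days of gridded atmospheric data, but the assignment step is the same nearest-prototype lookup shown here.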

Below is an example of this methodology for the Northern Plains region. Figure 1a shows a weather map with surface pressure and moisture, while Figure 1b shows the upper atmosphere wind patterns. These maps can be compared to each other directly, so if a case were matched to the very first surface pressure atmospheric pattern (row 1, column 1) in Figure 1a, then that case would also be matched to the very first upper air atmospheric pattern (row 1, column 1) in Figure 1b. These plots represent the surface and upper air atmospheric patterns for the specific day (case) that was selected.

Biases, or errors in the precipitation amount predicted by the model, for these patterns are calculated in Figure 1c, and the bias clearly varies across the atmospheric patterns shown in the previous two figures. Again, Figure 1c can be directly compared to Figures 1a and 1b, as described above: a case classified to the first atmospheric pattern (row 1, column 1) in the previous plots would have an average bias given by the very first block (row 1, column 1) in Figure 1c. Here, a positive precipitation bias means that the model predicted more precipitation than actually occurred, while a negative precipitation bias means that the model did not predict enough precipitation.

Precipitation biases are overwhelmingly positive for most of the atmospheric patterns represented; in other words, the model produces too much rainfall most of the time. However, atmospheric patterns that are known to occur primarily at night contribute to the few cases that have negative biases (left side of Figure 1c). This tells us that the model over-produces precipitation for patterns with high moisture, and under-produces precipitation for patterns that are more likely to cause precipitation overnight.
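As a simplified numeric sketch of the bias calculation (the node assignments and rainfall totals below are made up for illustration), the bias for each SOM pattern is just the mean of forecast-minus-observed precipitation over the cases matched to that pattern:

```python
import numpy as np

# Hypothetical cases: the SOM node each case was matched to, with the
# model's forecast and the observed precipitation (mm) for that day.
nodes    = np.array([0, 0, 1, 1, 1, 2])                 # node index per case
forecast = np.array([12.0, 9.0, 7.0, 6.5, 8.0, 3.0])    # model output
observed = np.array([ 8.0, 7.0, 9.0, 8.5, 9.5, 2.0])    # what actually fell

for node in np.unique(nodes):
    mask = nodes == node
    # Positive bias = model too wet; negative bias = model too dry
    bias = np.mean(forecast[mask] - observed[mask])
    print(f"node {node}: mean precipitation bias = {bias:+.2f} mm")
```

With these made-up numbers, node 0 comes out too wet (+3.00 mm) and node 1 too dry (-1.83 mm), mirroring the mix of positive and negative biases seen across the patterns in Figure 1c.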


Figure 1a (above): Lower level atmospheric patterns. The pressure is indicated by the black dashed lines and relative humidity (moisture) is shown in color. Blues indicate dry air and yellows and oranges indicate moist air.


Figure 1b (above): Upper level atmospheric patterns indicating how the air (wind) is flowing high above the surface of the Earth.


Figure 1c (above): Precipitation bias. The color bar and numbers indicate the bias value for each atmospheric pattern from Figures 1a and 1b.

Separating the cases by atmospheric pattern, as I am doing in my research, is useful for two reasons. First, determining which patterns the model struggles with helps pinpoint the causes of the model errors, which may lead to improvements in future models. Second, identifying the expected bias for specific weather patterns will lead to improved forecasts: forecasters will not only have access to better forecast models, but will also know how large the model error tends to be for a given atmospheric pattern.


Brooke Hagenhoff recently won first prize for her oral presentation, “A Regime Based Climatological Assessment of WRF Simulated Convection and Associated Precipitation,” at the 7th Transition of Research to Operations (7R2O) Conference at the 2017 American Meteorological Society Annual Meeting, and was a finalist at UND’s Graduate Research Achievement Day Poster Session (Natural Science category).