Oh, Computer Models

The weatherman’s handiest tool is the computer model. Running on supercomputers, a model crunches multitudes of observational data and feeds them into differential equations to predict atmospheric parameters at future times (tomorrow, 2 days out, 3 days out, and so on). These predictions are mapped out, and the weatherman interprets the resulting maps to make his forecast. Computer models update every 6-12 hours, giving the weatherman several fresh sets of maps to interpret each day.
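To make that concrete, here is a minimal sketch in Python of what “crunching differential equations” looks like: start from the current state, repeatedly step the equations forward in small time increments, and save the state at each forecast hour. (The Lorenz-63 equations here are a classic chaotic toy stand-in for the atmosphere; real model physics is vastly more complicated, and nothing below reflects how the GFS or ECMWF is actually built.)

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 equations: a classic chaotic toy stand-in for the atmosphere."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def forecast(initial_state, hours, dt=0.01, steps_per_hour=100):
    """Step the equations forward in time, saving the state at each 'hour'."""
    state = np.array(initial_state, dtype=float)
    outputs = [state.copy()]
    for _ in range(hours):
        for _ in range(steps_per_hour):
            state = state + dt * lorenz(state)  # simple forward-Euler step
        outputs.append(state.copy())
    return outputs

# "Today's analysis" is the initial condition; each saved state is one map.
for hour, state in enumerate(forecast([1.0, 1.0, 1.0], hours=5)):
    print(f"hour {hour}: {np.round(state, 2)}")
```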

Weather models aren’t perfect, and as a result, the weathermen “never get it right”. The differential equations are nonlinear and chaotic – the slightest error in an initial condition can make a world of difference a few days out. This is why forecasts tend to get worse at longer time ranges. Even worse, the inputs to the equations depend on the observational data collected by satellites, weather stations, and weather balloons, and no observational network will ever be perfect or complete in its breadth. Some areas are observationally spotty, so the data that do exist must be spatially interpolated to fill the gaps, compromising the accuracy of the input. And finally, some localized atmospheric processes, such as thunderstorm activity, are extremely difficult to represent precisely, so they get approximated. As a result, weather models tend to do a lot of changing and flip-flopping from run to run.
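That sensitivity is easy to demonstrate with the toy model above. In this sketch (reusing the forecast function from the previous example), two runs start from initial conditions that differ by one part in a million – a far smaller error than any real observing network achieves – and still end up completely different a few “days” out:

```python
# Two forecasts whose initial conditions differ by one part in a million.
run_a = forecast([1.0, 1.0, 1.0], hours=25)
run_b = forecast([1.000001, 1.0, 1.0], hours=25)

for hour in (1, 5, 15, 25):
    gap = np.linalg.norm(run_a[hour] - run_b[hour])
    print(f"hour {hour}: forecasts differ by {gap:.6f}")
# The gap grows roughly exponentially: negligible at first, then as large
# as the states themselves, at which point the forecast has no skill left.
```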

Different models use different numerical algorithms, different physics, different approximation schemes, and different ways of assimilating data to produce their solutions. Despite these challenges, computer models have become markedly more accurate in the past decade. But some are still better than others, and two of the most accurate models in use today are the GFS (the Global Forecast System, developed in the U.S.) and the ECMWF (developed by the European Centre for Medium-Range Weather Forecasts).

There’s a saying in the weather community – among those so-called “weenies” who love extreme weather – that the ECMWF is a “Dr. No”. Whenever there’s a big weather event on the horizon, the ECMWF always seems to output a more moderate solution. And the worst thing is – the ECMWF is always right! There are two possible reasons for this:

1) Most natural processes, weather events included, follow a Gaussian “bell-curve” distribution. The more moderate solutions, closer to the “mean”, will always have a higher probability of coming to fruition than the extreme solutions (see the first sketch after this list).

2) The ECMWF holds a slight edge over the GFS because the former uses an initialization scheme known as 4DVAR (four-dimensional variational data assimilation). 4DVAR gives the ECMWF an advantage in data-sparse regions, such as the Pacific Ocean. Since the prevailing jet stream and storm track across the U.S. run from west to east, having a better picture of a storm over the Pacific today means a better picture of the storm in California tomorrow, and a better picture of the storm on the East Coast down the road (see the second sketch after this list).
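A worked number makes the first point concrete. Here is a quick sketch (arbitrary illustrative numbers, not real forecast statistics) of how much likelier a near-mean outcome is than one two standard deviations out on a Gaussian curve:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean=0.0, sd=1.0):
    """Density of the Gaussian bell curve."""
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

# A "moderate" outcome at the mean vs. an "extreme" one 2 standard
# deviations out: the moderate outcome is about 7x more likely.
print(normal_pdf(0.0))                    # ~0.399
print(normal_pdf(2.0))                    # ~0.054
print(normal_pdf(0.0) / normal_pdf(2.0))  # ~7.39
```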
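As for the second point: 4DVAR chooses the model’s initial state by minimizing a cost function that balances a prior “background” guess against observations spread across a time window, with the model dynamics carrying the state forward to each observation time. The sketch below is a drastically simplified toy version of that idea (a single scalar state, a made-up linear “model”, and invented numbers; real 4DVAR operates on millions of variables):

```python
import numpy as np

def model(x0, t, growth=1.05):
    """Made-up linear 'forecast model': the state after t steps."""
    return x0 * growth ** t

x_background = 10.0           # prior guess for the initial state
sigma_b, sigma_o = 2.0, 1.0   # background and observation error spreads
observations = [(1, 11.0), (2, 11.6), (3, 12.4)]  # (time, value) pairs

def cost(x0):
    """4DVAR-style cost: misfit to the background plus misfit to each
    observation, with the model carrying x0 forward to observation time."""
    j_b = (x0 - x_background) ** 2 / (2 * sigma_b ** 2)
    j_o = sum((model(x0, t) - y) ** 2 / (2 * sigma_o ** 2)
              for t, y in observations)
    return j_b + j_o

# Crude minimization: scan candidate initial states for the lowest cost.
candidates = np.linspace(5.0, 15.0, 2001)
analysis = min(candidates, key=cost)
print(f"analysis (best-fit initial state): {analysis:.3f}")
```

The time window is the whole trick: a handful of observations over the Pacific, taken at different times, still pins down where a storm must have started – which is exactly the downstream advantage described above.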

The upshot? The GFS showed days of heavy rain here in Berkeley and an eventual severe weather outbreak in the Southeast, both of which I enjoy tracking. The ECMWF showed a couple of systems with significant breaks in between, much less rain, and no severe weather – playing Dr. No as usual.

Today, one of the models changed sides. Guess which one it was?
