Despite a century of technological advances, weather forecasts struggle to see beyond 2 weeks. Climate expert Ted Shepherd says chaos theory can explain why.
In October 1987, a savage storm with winds of over 100 km/h raced across parts of southern England, claiming 18 lives and uprooting some 15 million trees. The event has lodged itself in British collective memory, in part because weather forecasters had apparently failed to predict it just the night before.
“At the time, there was probably a little too much confidence in individual weather forecasts,” says Ted Shepherd, Grantham chair in Climate Science at the University of Reading in the United Kingdom. “Weather centres would now never trust just one set of forecasts. There is better connectivity and data sharing.”
Technological advances – such as data collected by satellites, buoys and ships; faster computing and machine learning; and a better understanding of atmospheric physics and dynamics – have all led to more accurate predictions. But Shepherd stresses that a change in approach has also been key to revolutionising weather forecasting.
“Forecasts are now an ensemble of data that attempt to capture uncertainty,” he explains. “This means that if the data from only one forecast system shows a storm coming, then you can present this potentially high-impact event as a low probability. It is a question of communication.”
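The ensemble idea Shepherd describes can be made concrete with a small sketch. The wind speeds and storm threshold below are invented values for illustration, not real forecast data: each "member" is one run of a forecast model, and the event probability is simply the fraction of members that predict the event.

```python
# Illustrative sketch of probabilistic ensemble forecasting.
# The wind speeds below are made-up values for ten hypothetical
# ensemble members (separate model runs with perturbed inputs).
member_winds_kmh = [55, 62, 48, 71, 58, 104, 60, 66, 52, 59]

STORM_THRESHOLD_KMH = 100  # assumed severe-storm criterion

# Count how many members predict storm-force winds.
n_exceeding = sum(w >= STORM_THRESHOLD_KMH for w in member_winds_kmh)
probability = n_exceeding / len(member_winds_kmh)

print(f"{n_exceeding} of {len(member_winds_kmh)} members predict storm-force winds")
print(f"Estimated probability of a severe storm: {probability:.0%}")
```

Here only one member sees the storm, so the forecaster can communicate a low-probability but high-impact risk rather than either ignoring the outlier or presenting it as certain.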
Shepherd also notes that the practice of naming storms encourages people to take severe weather events seriously. “They can more easily visualise these storms,” he adds. “And when we explain to people why things are happening, they are much more prepared to act.”

Given these technological and methodological advances, then, how far ahead might we be able to predict the weather? Could we soon be able to predict with a degree of certainty that a storm of a specific magnitude will occur in a specific location, months ahead of time?
Shepherd says the answer to this question was probably arrived at over 50 years ago. “The chaos theorist Edward Lorenz wrote a famous paper in 1969 that established a theoretical upper limit of 2 weeks for weather predictions,” he explains. “I think that this estimate has held up pretty well.”
The reason for this is the butterfly effect: the idea that the path of a tornado, say, might be influenced by the distant flapping of a butterfly’s wings. Minute differences in initial conditions can yield wildly diverging outcomes. This is why forecasts get less accurate the further ahead we look, and why any attempt to predict specific weather months in advance would be hopelessly unreliable.
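This sensitivity can be demonstrated numerically with Lorenz’s own 1963 model of atmospheric convection. The sketch below (a simplified Euler integration, not a real forecast model) starts two trajectories a hundred-millionth apart and steps them forward; the tiny gap grows by many orders of magnitude.

```python
# A minimal sketch of sensitive dependence on initial conditions,
# using the Lorenz (1963) system with its standard parameters.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one small Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)  # minutely different starting point

for _ in range(2000):  # integrate for 20 model time units
    a, b = lorenz_step(a), lorenz_step(b)

# Euclidean distance between the two trajectories.
gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"Initial separation: 1e-08, separation after 2000 steps: {gap:.4f}")
```

An initial difference of 0.00000001 ends up amplified by many thousands of times, which is exactly why two forecast runs that begin from almost identical observations can disagree completely two weeks out.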
Shepherd also points out that such long-range weather predictions wouldn’t be particularly useful anyway. “Weather is classically about what is happening where at a given time,” he remarks. “Will it rain tomorrow, or is it likely to rain next week?” The further ahead one looks, the less precise the weather information generally needs to be: it’s usually enough to know roughly when an event will occur.
These are the sorts of questions that might affect your decision-making. Anything more long term, and predictions begin to look more like climate forecasting, i.e. identifying prevailing trends and how they change from year to year. “This point where weather becomes climate is really the cutting edge of the field,” adds Shepherd. “The EU-funded CausalBoost project, for example, occupies this middle space, as it focused on a sub-seasonal timescale.”
This project, which sought to strengthen rain forecasts for the Mediterranean region using machine learning techniques, was carried out by Marie Skłodowska-Curie researcher Marlene Kretschmer together with Shepherd and colleagues elsewhere in Europe. By analysing a range of data sets with machine learning algorithms, they were able to achieve an improved understanding of the drivers of Mediterranean precipitation. This could help farmers to plan for dry seasons ahead.
Find out more about Kretschmer and Shepherd’s research: Improving Mediterranean rainfall predictions.