August 15, 2011

Climate Models Not So Good For Crop Prediction

Filed under: Adaptation, Plants

Many global warming alarmists tout the notion that anthropogenic global warming will result in widespread crop failures as (projected) climate changes increasingly lead to increasingly bad growing conditions (see our article Science Fiction Down on the Farm, for some examples).

Using Al Gore’s lingo, we are quick to call “BS” on that premise, for the simple fact that this is not how things work. Crop scientists and farmers have an economic incentive to improve genetic cultivars and agricultural practices to maximize output given the prevailing environmental conditions. And, they are pretty effective at what they do. Despite the “global warming” and other affiliated and/or non-affiliated climate changes that have occurred over the past 100 years, global crop production just keeps on increasing—see our recent coverage here of this very good news.

We are clearly and demonstrably able to change agricultural practices to keep up with changing climate while increasing yields.

So much for the “dumb farmer scenario” that farmers stand by and watch their crops fail as conditions change.

But what about those future climate changes that underlie the scare scenarios? Are climate models really able to capture the climatic factors that are important for agriculture?

A new soon-to-be-published study finds that the models are not so hot, at least over the world’s most productive agro-region, the good-old-US of A. As we shall see, though, the pressure to say the politically correct thing still comes beaming through from the halls of Academia.

A research team led by Adam Terando of Penn State University compared the observed temporal and spatial patterns of three agriculturally important climate indices across North America with the patterns predicted by a collection of climate models.

The three “agro-climate” indices examined by Terando and colleagues were the annual number of frost days (days when the minimum temperature dropped below freezing), the “thermal time” (the amount of time during the growing season that the temperature is within the optimal growth range for a particular crop, in this case corn), and the “heat stress index” (the amount of time during the growing season that the temperature exceeds a threshold value above which negative impacts to crop production can occur). To assess patterns of change in these agro-climate indices across North America, the indices were compared over two time periods, 1951-1980 and 1981-2010 (Figure 1).
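To make the three indices concrete, here is a minimal sketch of how each could be computed from a station's daily temperature record. The thresholds used below (0 °C for a frost day, a 10–30 °C optimal band for corn, and 35 °C for heat stress) are illustrative assumptions for the sketch, not the exact values used by Terando and colleagues.

```python
def frost_days(tmin):
    """Annual count of days whose minimum temperature drops below freezing.

    tmin: iterable of daily minimum temperatures in degrees Celsius.
    """
    return sum(1 for t in tmin if t < 0.0)

def thermal_time(tmean, lo=10.0, hi=30.0):
    """Degree-days accumulated while temperature sits in the optimal
    growth range [lo, hi]; the 10-30 C band is an assumed range for corn.
    Days above `hi` contribute only up to the capped value."""
    return sum(min(t, hi) - lo for t in tmean if t > lo)

def heat_stress(tmax, threshold=35.0):
    """Degree-days above a damaging high-temperature threshold
    (the 35 C cutoff here is an illustrative assumption)."""
    return sum(t - threshold for t in tmax if t > threshold)

# Toy example on a four-day record (degrees Celsius):
print(frost_days([-2.0, 1.0, -0.5, 3.0]))        # 2 frost days
print(thermal_time([5.0, 15.0, 35.0]))           # 5 + 20 = 25 degree-days
print(heat_stress([30.0, 36.0, 40.0]))           # 1 + 5 = 6 degree-days
```

Trend maps like those in Figure 1 would then come from computing these totals per station per year and fitting a trend over each 30-year window.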

Over this 60-year period, in general, the number of frost days declined (a positive climate change for crops), the “thermal time” increased (another positive change), and the heat stress index markedly increased across the southern and western U.S. (where little corn is grown) but decreased in the Midwest and Southeast, which together are the Saudi Arabia of corn.


Figure 1. Agro-climate indices trends for 1951 – 1980 (left column) and 1981 – 2010 (right column). Panels a) and b): frost days; panels c) and d): thermal time; panels e) and f): heat stress index. Filled circles represent individual stations with colors representing either a warming (red) or cooling (blue) trend. Stations with trends between -0.5 and 0.5 days per year for frost days and -2.5 and 2.5 degree days per year for thermal time and heat stress index are not displayed. Background gridded values are interpolated trends from the weather station data with color bars on the right panels in the same units as the station trends (source of images and text: Terando et al., 2011).

Terando and colleagues then assessed how well a collection of 17 different climate models did at reproducing these observed changes. They found that the climate models did pretty well at capturing the decline in the number of frost days; fared a bit worse, but still largely acceptably, at capturing the changes in thermal time; and showed little skill at all in reproducing the observed changes in the heat stress index, with the primary error being that the models predicted a far greater increase in heat stress than actually occurred (that is, the models predicted heat stress to get much worse than it actually did).

The authors summed their findings up this way:

GCM [climate model] skill, defined as the ability to reproduce observed patterns (i.e. correlation and error) and variability, is highest for frost days and lowest for heat stress patterns. …The lack of agreement between simulated and observed heat stress is relatively robust with respect to how the heuristic is defined and appears to reflect a weakness in the ability of this last generation of GCMs to reproduce this impact-relevant aspect of the climate system. However, it remains a question for future work as to whether the discrepancies between observed and simulated trends primarily reflect fundamental errors in model physics or an incomplete treatment of relevant regional climate forcings. [emphasis added]
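The “skill” in the quoted passage is defined by pattern correlation and error between observed and simulated fields. As a rough sketch of that idea, the two standard ingredients, Pearson correlation and root-mean-square error over a set of station trends, can be computed like this (the numbers in the example are made up for illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmse(x, y):
    """Root-mean-square error between observed and modeled values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# Hypothetical observed vs. modeled heat-stress trends at five stations:
observed = [-1.0, -0.5, 0.2, -0.8, 0.1]
modeled  = [ 1.5,  2.0, 1.8,  2.2, 1.9]   # models warm where obs cooled
print(pearson(observed, modeled), rmse(observed, modeled))
```

High correlation and low error mean high skill; the paper’s finding is that this pair of numbers looks worst for the heat stress index.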

So what we have is a set of observations that shows that the climate changes that have occurred across much of North America over the past 60 years have been a net benefit to agriculture. Throw in technological changes and a healthy dose of carbon dioxide fertilizer, and the net result is a spectacular increase in crop yields. All the while, climate models would have led us to believe that a fair degree of climate deterioration was going to take place—a change that would have negatively affected crop production.

This does not bode well for climate model-based predictions of what agro-climate changes are to come in the future, a failure which feeds back into the reliability of the apocalyptic proclamations of future crop failures which themselves are already built upon the demonstrably false “dumb farmer” premise. We wonder just how many strikes it takes before a silly concept can be thrown out!

Terando and colleagues are not so quick to agree with us—instead, they warn the reader that their results only apply to the “anomalous conditions seen in North America” and thus “should not be extrapolated to other areas as an indication of how a warming world will impact agriculture.”

Hmmm, they test the models over one of the most productive agricultural regions in the world and find that they don’t work so well, but still want us to have faith that prognostications based upon the model projections for other parts of the world bear some semblance of reality?

Wonder what phrase Al Gore would use to describe that notion? We agree. And, furthermore, we harrumph that the authors probably had to put the usual global warming clinker in there to keep the powers that be (their bosses) and the powers that let them be (their federal funders) happy.

Reference:

Terando, A., W.E. Easterling, K. Keller, and D.R. Easterling, 2011. Observed and modeled 20th century spatial and temporal patterns of selected agro-climate indices in North America. Journal of Climate, in press.



