[Note: This is the sixth of a series of mini-editorials on the junk science influencing the global warming issue. Other topics will include the UN Environmental Program, and some individuals heavily involved in these matters.]
We discuss here in some detail the way in which warming trends were introduced into the IPCC Report when in fact they did not exist or were extremely small. We focus on the period 1979 to 1997. There was cooling up to 1976; in 1998 there was a super El Niño, and no subsequent warming. Our discussion is in three parts: (1) the bottom-up approach; (2) the top-down approach; and next week I shall discuss (3) the treatment of sea surface temperatures (SST).
Bottom-Up Distortion of Temperature Data
The Climatic Research Unit of the University of East Anglia (CRU-UEA), under the direction of Dr. Philip Jones, collected data from weather stations around the world. These are almost all land-based stations, with a high concentration in the United States and Western Europe and a lower concentration elsewhere; many parts of the globe are hardly covered by reliable stations at all.
There are a variety of problems with such data, and the investigators were aware of most of them. Many stations produce useless data, either because of inadequate maintenance, or because of their location. Anthony Watts (in his WUWT blog) has shown that even stations in the USA were badly placed and subject to local warming influences that were not adequately corrected.
The surface of the earth is then divided into grid boxes, usually five degrees by five degrees. When there are several stations in a grid box, the investigators would choose those they considered most reliable, which in many cases meant urban stations or stations at airports, since these are well maintained. However, because of their location, such stations are generally subject to the urban heat-island (UHI) effect, a local warming that increases with population and urban growth over time and can masquerade as a temperature trend of a global nature. The investigators tried various ways to eliminate such local UHI trends, but were not very successful.
The problem was greatly exacerbated by the closing of over half the world's weather stations between 1970 and 2000 (see NIPCC Summary, Fig 12), which in most cases removed rural stations, as well as stations at higher latitudes and altitudes that tended to show a lower warming trend or no warming trend at all. It should be obvious that this drastic change in the sampling population would introduce a fictitious warming trend, an artifact of the change. E. Michael Smith and Joseph D'Aleo have documented in some detail how such artificial temperature trends could be produced even when there was no global trend.
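The sampling artifact described above is easy to illustrate. The following is a minimal sketch with hypothetical station values (not actual CRU data or the Smith/D'Aleo analysis): two stations in one grid box each report a perfectly flat record, yet dropping the cooler rural station halfway through produces a step warming in the grid-box average.

```python
# Illustrative sketch: two stations in one grid box, each with a flat
# temperature record. The cooler "rural" station is dropped halfway
# through, so the simple mean of the available stations rises even
# though no individual station warmed. All values are hypothetical.

urban = [15.0] * 20          # urban station: constant 15 C for 20 years
rural = [12.0] * 20          # rural station: constant 12 C for 20 years

grid_mean = []
for year in range(20):
    if year < 10:
        readings = [urban[year], rural[year]]   # both stations report
    else:
        readings = [urban[year]]                # rural station closed
    grid_mean.append(sum(readings) / len(readings))

# Each station is flat, yet the grid-box mean steps up from 13.5 to 15.0.
print(grid_mean[0], grid_mean[-1])
```

The apparent "warming" here is entirely an artifact of the change in the sampling population, which is the point of the paragraph above.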
The Top-Down (TD) Approach
In many ways, the Top-Down (TD) approach to deriving the Global Mean Surface Temperature (GMST) is to be preferred over the bottom-up approach (deriving GMST by collecting data from weather stations and sea-surface readings). The TD approach relies primarily on data from weather satellites, the only truly global measuring system. It uses a single microwave sounding unit (MSU) instrument and is therefore independent of the vagaries of individual weather stations and their thermometers.
There are of course certain disadvantages: the MSU cannot measure temperatures at specific levels of the atmosphere but derives instead a weighted mean of the vertical temperature profile; the times of observation are fixed by the orbit of the satellite; and a change of satellite (and MSU instrument) requires an overlap in operating time to permit recalibration. Nevertheless, by comparing different view angles, one can change the weighting factors and obtain a temperature value for the lower troposphere. The University of Alabama in Huntsville (UAH) group has shown good agreement of UAH results with those of radiosondes carried by weather balloons.
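The weighted-mean idea can be sketched as follows. The temperature profile and the weights here are illustrative assumptions only, not the actual MSU weighting functions or UAH retrieval coefficients; the sketch shows only the principle that combining view angles re-weights the measurement toward lower layers.

```python
# Sketch of the MSU principle with made-up numbers: the instrument
# observes a weighted mean of the vertical temperature profile, and the
# weights depend on the view angle. Combining two views can shift the
# effective weighting toward the lower troposphere.

profile = [15.0, 9.0, 2.0, -20.0, -55.0]   # temps (C), surface upward

nadir_w = [0.30, 0.30, 0.20, 0.15, 0.05]   # nadir view: weights peak low
limb_w  = [0.10, 0.20, 0.30, 0.25, 0.15]   # off-nadir: weights peak higher

def weighted_mean(temps, weights):
    """Weighted vertical mean, as a single MSU channel would report."""
    return sum(t * w for t, w in zip(temps, weights)) / sum(weights)

t_nadir = weighted_mean(profile, nadir_w)
t_limb  = weighted_mean(profile, limb_w)

# A linear combination of the two views emphasizes the lower layers,
# illustrating how a lower-troposphere value can be extracted.
t_lower = 2 * t_nadir - t_limb
print(round(t_nadir, 2), round(t_limb, 2), round(t_lower, 2))
```

The real retrieval is of course far more involved, but the arithmetic above is the essence of why different view angles yield different effective measurement heights.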
As early as 1997, I noticed a disparity between the temperature trends of satellites and surface trends, especially in the tropics (see Fig 9 in Hot Talk, Cold Science, 1997). The troposphere trends (between 1979 and 1995) were close to zero or even slightly negative, while surface trends showed a warming of about 0.05 deg per decade. This disparity is just the reverse of what one would expect from greenhouse (GH) models [see IPCC-SAR], namely a positive (warming) troposphere trend up to twice as large as the surface trend.
In addition, I noticed that the proxy data to which I had access showed no surface warming (the tree-ring data of Jacoby et al [Fig 16 in HTCS] and the ice-core data of Dahl-Jensen et al). I tried very hard to obtain more proxy data but was not successful. For example, I noticed that Michael Mann's infamous hockey-stick graph did not extend beyond 1979, and I suspected that his proxy temperatures diverged from the instrumented surface results. Yet when I wrote to Mann about post-1980 proxy data, I received only a brusque negative reply. Thanks to Climategate, we now know what I had then suspected: that Mann and Jones were engaged in a scheme to "hide the decline" [in post-1979 proxy temperatures].
To sum up: both the satellite results and the proxy data tell us that the claimed rise of surface temperature between 1979 and 1997, shown by the IPCC, is probably much smaller than claimed, or even non-existent.
Atmospheric physicist S. Fred Singer is a Research Fellow at the Independent Institute, Professor Emeritus of Environmental Sciences at the University of Virginia, and former founding Director of the U.S. Weather Satellite Service. He is author of Hot Talk, Cold Science: Global Warming's Unfinished Debate (The Independent Institute).