Taking the Earth’s Temperature

ACS Climate Science Toolkit | Energy Balance

Credit: David Simonds

This cartoon is a good representation of many of the problems involved in taking the temperature of the Earth. The most obvious problem, even without reference to the cartoon, is that there is not a single temperature for the Earth but an enormous number of local temperatures that span the range from the frozen Antarctic to the tropical jungles and arid deserts of Africa. How this array of local temperatures is combined to give a single measure of the planetary temperature is the difficult task that is briefly outlined here.

Among other problems associated with taking the Earth’s temperature are:

  • different instruments used by different observers
  • differences in locations from potential interferences such as heat sources
  • differences in height above the surface where the temperatures are taken
  • differences in when readings are taken and which readings are recorded

Further complicating the issue is that there are many locations on the globe where there are no temperature measurements or where there have been only sporadic ones, such as over the oceans. Satellite measurements of atmospheric temperature beginning in 1979 have helped with coverage, particularly over the oceans, but the data are indirect measures based on black body radiation and require interpretation to convert to temperatures for comparison with data from direct thermometric instruments.

To take the Earth’s temperature and see how it is changing with time, three groups of scientists (two in the United States and one in the United Kingdom) analyze compilations of data from thousands of instrumental records over about the past 150 years. The groups are:

  • the National Aeronautics and Space Administration (NASA), Goddard Institute for Space Studies (GISS) at Columbia University, whose results are often labeled GISS or GISTEMP
  • the National Oceanic and Atmospheric Administration (NOAA), National Climatic Data Center (NCDC), whose results are usually labeled as NOAA or NCDC
  • a cooperative effort between the Hadley Centre for Climate Prediction and Research and the University of East Anglia’s Climatic Research Unit (CRU), whose results are usually labeled with some variation of HadCRU.

All three groups provide results for land and sea temperatures separately as well as for the entire land-sea globe. More recently, a fourth analysis, covering land temperatures only, was announced by a University of California, Berkeley group: the Berkeley Earth Surface Temperature (BEST) study. All four analyses, shown superimposed in this figure, come to the same conclusion: the Earth’s land temperature has warmed by 0.9 °C in the past half century.

Source: Loren Cobb, “The Berkeley Earth Surface Temperature (BEST) Spatial Averaging and Interpolation Method,” UC-Denver Data Assimilation Seminar, October 2011. Data from each analysis are referenced to the 1951-1980 mean.

Temperature anomalies

The “temperature” values in this plot and most of the other Earth surface temperature data you usually see are not absolute temperatures, but temperature anomalies. An anomaly is a difference from some reference value. For the data in the plot, the reference value is the mean of the individual values over the 1951 to 1980 time period. That is why the zero scale line passes through the values in this time period.

Temperature anomalies are used to characterize the temperature of the Earth because they can often better represent climatic conditions and changes than absolute temperatures. As an example, consider two weather stations, one in a valley and the other at a higher elevation on a nearby mountainside. Temperatures in the valley will generally be higher than those on the mountain. But, as the temperatures change with time of day or the seasons, the differences between the measured value at each site and the average value over a reference time period for each site will usually be highly correlated. Averaging these anomalies over a week (or month or year) within a relatively small area of the planetary surface gives an average temperature anomaly for that area.
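The valley-and-mountain example can be sketched numerically. The monthly temperatures below are hypothetical illustrative values, not measurements from any real station:

```python
# Hypothetical monthly mean temperatures (°C) for two nearby stations,
# one in a valley and one on a mountainside (illustrative values only).
valley   = [5.0, 7.0, 12.0, 17.0, 21.0, 24.0, 26.0, 25.0, 20.0, 14.0, 8.0, 5.5]
mountain = [-2.0, 0.0, 5.0, 10.0, 14.0, 17.0, 19.0, 18.0, 13.0, 7.0, 1.0, -1.5]

def anomalies(series):
    """Difference of each value from the series' own reference-period mean."""
    ref = sum(series) / len(series)
    return [t - ref for t in series]

va, ma = anomalies(valley), anomalies(mountain)

# The absolute temperatures differ by roughly 7 °C, but the anomalies
# track each other closely, which is why anomalies, not temperatures,
# are averaged across stations.
for month, (v, m) in enumerate(zip(va, ma), start=1):
    print(f"month {month:2d}: valley {v:+6.2f}  mountain {m:+6.2f}")
```

Each station is compared with its own reference mean, so the systematic elevation difference between the two sites drops out and only the shared variation remains.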

Other time periods and lengths of time periods can be chosen as reference values, for example, the average over the entire 20th century for some purposes. Shorter time periods are problematic, because one or two more extreme values during the period can dominate the average and distort the reference value.

2011 annual average temperature anomalies plotted on a map of the Earth’s surface
Credit: NASA

The figure shows how the 2011 annual average temperature anomalies for these small areas are plotted on a map of the Earth’s surface to show regions that are warmer or cooler relative to the reference time period, 1951-1980. NOAA and HadCRU analyses are usually based on a grid of 5° latitude by 5° longitude cells (the small areas). GISS analyses (mapped in the figure) divide the global surface into a grid of 8000 cells of approximately equal size. For cells with no thermometric data, the analyses fill in values using algorithms that interpolate from bordering cells with data, where possible.
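The fill-in step can be illustrated with a toy grid. The numbers and the simple neighbor-averaging rule below are illustrative assumptions, not the actual GISS, NOAA, or HadCRU interpolation algorithms, which are considerably more sophisticated:

```python
# Toy anomaly grid (°C); None marks a cell with no thermometric data.
grid = [
    [0.4, 0.6, None],
    [0.2, None, 0.8],
    [0.1, 0.3, 0.5],
]

def fill_missing(g):
    """Fill each empty cell with the mean of its data-bearing neighbors."""
    rows, cols = len(g), len(g[0])
    filled = [row[:] for row in g]
    for r in range(rows):
        for c in range(cols):
            if g[r][c] is None:
                neighbors = [
                    g[nr][nc]
                    for nr in (r - 1, r, r + 1)
                    for nc in (c - 1, c, c + 1)
                    if 0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) != (r, c) and g[nr][nc] is not None
                ]
                if neighbors:  # leave the cell empty if no neighbor has data
                    filled[r][c] = sum(neighbors) / len(neighbors)
    return filled

print(fill_missing(grid))
```

Cells with no data-bearing neighbors stay empty, which corresponds to the grey (no data) regions on maps such as the one in the figure.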

You are probably aware that urban areas, with their density of human activities, buildings, paved surfaces, and so on, are warmer than surrounding less populous or rural areas. Since urbanization has increased over the last century, so has the extent of urban heat islands. Even with corrections, temperature records from these areas could introduce a warming bias into the data. However, analyses that use satellite imagery to identify dark areas on Earth with little human activity, and that use only temperature measurements from those dark areas, give the same warming result as analyses that include all the data. The full set of temperature data is therefore not compromised by an urban heat island effect.

To get a single value for the average temperature anomaly of the Earth’s surface, the values in the grid cells are summed and the average calculated. The summation and averaging are weighted to account for the different areas of each grid cell (in the NOAA and HadCRU grids). They are also weighted to account for the uncertainty in the cell value. A cell in which there are many individual measurements is weighted more heavily than one with fewer, for example, and these weightings enter into the calculation of the uncertainty in the final average Earth surface temperature anomaly.
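The area weighting for a latitude-longitude grid can be sketched as follows. The anomaly values are hypothetical, and real analyses also fold in the uncertainty weights described above:

```python
import math

# A cell's surface area on a latitude-longitude grid is proportional to
# the cosine of the latitude at its center, so near-polar cells count
# less than equatorial cells of the same angular size.
def global_mean_anomaly(cells):
    """cells: list of (center_latitude_deg, anomaly_deg_C) pairs."""
    weights = [math.cos(math.radians(lat)) for lat, _ in cells]
    total_w = sum(weights)
    return sum(w * a for w, (_, a) in zip(weights, cells)) / total_w

# Hypothetical data: an equatorial cell and a near-polar cell with equal
# and opposite anomalies; the equatorial cell dominates the weighted mean.
cells = [(2.5, +1.0), (82.5, -1.0)]
print(f"{global_mean_anomaly(cells):+.3f}")
```

A simple unweighted average of these two cells would be zero; the area weighting instead gives a clearly positive result, because the equatorial cell covers far more of the Earth’s surface.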

Although the results for the Earth’s temperature are usually given as anomalies, climate scientists have used the measured temperature data to calculate average temperatures. NOAA’s result for the 20th-century average land-ocean Earth surface temperature is 13.9 °C. Adding the 2011 NOAA surface temperature anomaly of 0.51 °C gives an average Earth surface temperature of 14.4 °C for 2011. To find out more about how the Earth is warmed and the greenhouse gases that are responsible, see How Atmospheric Warming Works and Greenhouse Gases.

The boundaries between rectangular grid cells with different anomalies do not appear as sharp edges over most of the map because the graphing program smooths the boundaries to give a better visual representation of the gradations. The value at the upper right-hand corner of the figure, +0.50 °C, is the average Earth surface temperature anomaly for the GISS 2011 analysis. The grey regions on the map are cells for which there are no data.