
PHYS 1080: ENERGY USE AND CLIMATE CHANGE: Chapter 8 - Modeling


RAMIFICATIONS OF UNCERTAINTIES

Emphasizing the uncertainties involved in our attempts to model the future was unavoidable until now. Uncertainties and margins of error tend to leave us skeptical, and the most obvious retreat in the face of such skepticism is to do nothing and hope everything will turn out OK. In spite of these uncertainties, the overall trend is inescapable: the continuing use of fossil fuels is changing the composition of the atmosphere, and this change leads to an average temperature increase that in turn will cause the sea level to rise. How much of a rise and when it will happen are subject to uncertainties. The time that most of the models address is hardly beyond our definition of "now" (Chapter 2). Uncertainties are funny: we might be wrong in either direction.

If we could be wrong, then it might be useful to prepare for the worst-case scenario. As far as humans are concerned, the worst-case scenario is that the global temperature will rise to such a degree that all global ice and snow will melt. Table 8.1 shows the potential rise in sea level that would result from such a meltdown. The rest of the book tries to analyze the consequences of our role. (Source for Table 8.1: US Geological Survey.7)

Chapter 8

Modeling

Science dominated the first seven chapters. It is now time to start the transition to the human-induced aspects of global warming. Specifically, we want to address decision making. If we accept the premise that the data support an increasingly significant anthropogenic contribution to the composition of the atmosphere that affects the global climate, then the question becomes, what can we do to reestablish an atmospheric equilibrium we can live with? Many of the processes described span a multitude of time scales that affect the correlation between human activities and the resulting changes in the atmospheric composition. These time scales can be as slow as a few thousand years for the mixing time of deep ocean water and surface ocean water (Chapter 4). We must adjust our activities to avoid large deviations from atmospheric equilibrium. We will be wise to remember Le Chatelier's principle: one way or another, the physical system will reestablish equilibrium. If humans disturb the equilibrium in a significant way, then one possibility for restoration is to dislodge us from the top of the food chain.

Collective choices come mostly through the political process. Political decisions that will affect the future must be based on some ability to predict the future. Predicting the future on the time scale needed here is a very risky business. In Chapter 1 I tried to discard the easy choices: relying on somebody else to make the predictions or learning from the experiences of others.

CLIMATE AND WEATHER

The alternative that remains is modeling. The idea is to model the global climate, introduce anthropogenic contributions, and monitor the changes that take place in the model. Climate is defined as “average” weather. The time scale for averaging is not well defined; it can span a year or several decades. Weather predictions and climate predictions are related, but they are not identical. Weather predictions are considered one of humanity’s oldest dreams. Initially, in many religions, the weather was seen as a divine response to human behavior. Science now is in the process of adopting the same thesis with a distinctly different methodology and reasoning. Weather forecasting was initially part of mythology, superstition, and folklore, with early forecasters being high priests, witch doctors, or medicine men. Advanced scientific weather forecasting, based on the solutions of equations that reflect rigorously tested physical laws,


is a skill acquired during the 20th century, and the effectiveness of the predictions more or less parallels the advances in digital computing. The usual validation of weather predictions is done by experiencing the real weather and comparing it with the predictions. The time span for predictions is measured in days, although recent long-term predictions measured in weeks and months are becoming more common. In many respects climate predictions are easier than weather predictions because they smooth over sharp local and spatial extreme events. However, because present political decisions are driven mostly through response to local events, there is a strong incentive to develop climate models accurate enough to predict localized extreme events. Both the weather and the climate systems are modeled based on the physical laws described in the previous chapters. This is complicated because, as we saw in the previous chapters, the atmosphere is coupled to the oceans and to the land biota. We illustrate this complexity again in Figure 8.1. To monitor the anthropogenic contributions, which amount to a mere 3% of the natural carbon fluxes, we need to model the carbon and water cycles. But the biggest hurdle to modeling is probably not the modeling of the physical environment but the modeling of the human environment. One of the difficulties is cultural and educational: physical scientists investigate and model physical environments. Models are based on physical


Figure 8.1. Elements of anthropogenic contributions to climate modeling

laws repeatedly tested under controlled laboratory conditions. Human behavior, however, individual or collective, is investigated by social scientists. The main tools that social scientists use are statistical correlations between measurable quantities. Here we are required to use both methods. It is not surprising that some of the harshest criticisms of projections of the future impact of anthropogenic atmospheric emissions come from economists and politicians. More about this later.

THE INTERGOVERNMENTAL PANEL ON CLIMATE CHANGE

The kind of coordination we are looking for on a global scale requires global political and scientific cooperation. On the scientific side, the global community met this challenge by creating the Intergovernmental Panel on Climate Change (IPCC) in 1988. The political coordination is still a work in progress, which will be discussed in Chapter 13. The IPCC was created by two United Nations (UN) organizations: the World Meteorological Organization (WMO) and the UN Environment Programme (UNEP). The role of the IPCC was to assess the scientific, technical, and socioeconomic information relevant to understanding the scientific basis of human-induced climate change. The IPCC does not collect data or carry out research; it acts to produce assessments based on published, peer-reviewed, scientific literature. They publish periodic assessment reports (in 1990, 1995, 2001, and 2007) and technical reports. Some of the information we have presented so far is based on these reports.1

The constant interplay between governments and science unavoidably puts the IPCC in the middle of the global political debate on climate change. The IPCC tries to be policy relevant and policy neutral, a shaky ground to stand on. The IPCC shared (with Al Gore) the 2007 Nobel Peace Prize for the effort to "build up and disseminate greater knowledge about man-made climate change and to lay the foundations for the measures that are needed to counteract such changes."2 The role of the IPCC will come up repeatedly as we proceed to discuss human involvement. Most of the recent global political activity related to climate change is stimulated by their reports. Because the main role of the IPCC is to provide the scientific and technical basis for political action, the main instrument at their disposal is modeling. They must try to predict outcomes of actions and inactions for all the issues related to climate change. It is not surprising that most of the modeling attempts are centered on this organization. The information in this chapter will be largely based on the IPCC's 2007 report. Before we address the specifics of modeling, it will be useful again to look at modeling on a more abstract level.

Box 8.1

DETERMINISM, CHAOS, AND UNCERTAINTY

The era of modern science arguably starts with what is often referred to as the Copernican revolution, named after the Polish priest Nicolaus Copernicus. Copernicus's main contribution was the suggestion that the astronomical observations of his time were easier to explain if Earth revolves around the sun rather than, as was the common belief at the time, if the sun revolves around Earth. The culmination of this revolution is commonly associated with Isaac Newton, an English scholar who, among his other contributions, formulated three basic laws that associate forces with the detailed movement of objects and defined the basic force responsible for the motion of celestial objects. Newton's first law states that if an object does not experience a net force, then it will remain at rest if it was at rest, and if it was not at rest, then it will continue to move at constant velocity in a straight line. Newton's second law states what will happen to an object that experiences a net force: the object will accelerate in the direction of the force. The formula that describes this law is very simple and given as

F = ma, [8.1]

where F is the net force and a is the acceleration. The proportionality constant m is the mass of the object. For a given force, a big object will accelerate a little, and a small object will accelerate a lot. Newton's third law simply states that forces never operate alone; they always come in pairs. That is, for every action that one object exerts on another, there is a reaction force that the second object exerts back.

The issue of direct interest to us here is the second law. Acceleration is the change of velocity with time. The second law tells us that if we know the net force being applied to an object, and we know its mass and its position and velocity at any time, then we should be able to calculate the changes in velocity (because we know the acceleration) and the changes in position at any time. More than that, we can apply this simple formula backward and calculate the entire history of the object. If we now extrapolate this statement to the entire universe—if we find all the forces that act on objects in the universe and the masses of these objects and measure the positions and velocities of all these objects at one particular time—then Newtonian mechanics tells us that we should be able to calculate the future and the past of the universe. The universe, according to this picture, is fully deterministic. All that we need in order to model the future of the universe is to get complete information about its present. Moreover, we can validate the model by calculating the past and comparing it to what we actually know happened in the past through direct measurements. In many ways, this is the essence of modeling the climate of Earth. We assume that the system is Newtonian.
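This deterministic picture can be made concrete in a few lines of code. The sketch below assumes nothing beyond Newton's second law: given the net force, the mass, and the state (position and velocity) at one instant, it steps the state forward in small time increments. The specific numbers, a 2 kg mass under a constant 19.6 N downward force, are illustrative only.

```python
# A minimal sketch of Newtonian determinism: a = F/m lets us advance the
# state (position, velocity) of an object step by step.

def step(pos, vel, force, mass, dt):
    """Advance one small time step using a = F/m (semi-implicit Euler)."""
    acc = force / mass
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Example: a 2 kg object released from rest at a height of 100 m under a
# constant downward force of 19.6 N (its weight).
mass, force = 2.0, -19.6          # kg, N
pos, vel = 100.0, 0.0             # m, m/s
dt = 0.001                        # s
for _ in range(1000):             # integrate 1 s of motion
    pos, vel = step(pos, vel, force, mass, dt)

# Analytic check: v = at = -9.8 m/s; x = 100 - (1/2)(9.8)(1)^2 = 95.1 m
print(round(vel, 2), round(pos, 1))
```

Because every step follows mechanically from the previous one, the whole trajectory is fixed once the initial state and the force are known, which is exactly the assumption climate models make about Earth's physical system.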

The weather is famous for being unpredictable, and indeed we cannot necessarily predict the exact future of every Newtonian system. Our prediction depends on evolving the system from some initial conditions. If the system turns out to be very sensitive to these initial conditions, then very small changes in the initial conditions, which can result from unavoidable small errors in measurements, can lead to widely divergent evolutionary trajectories. We call such systems "chaotic." Major contributions to the understanding of chaotic systems have come from meteorologists trying to predict the weather.

MODELING

The usual outcomes of modeling the anthropogenic contributions to the global climate are future trace-gas concentrations, global mean temperature changes, and predicted mean sea-level rise.1 The reasoning is sequential: we estimate the changes in trace-gas concentrations based on socioeconomic analysis of the global society, which includes projections of changes in population growth, standard of living, energy mix, and so forth. These estimates let us calculate future atmospheric concentrations of greenhouse gases (GHGs). These concentrations in turn allow us to calculate average radiative forcing. Earth responds to forcing by increasing its average temperature, and the rise in the average temperature, in turn, causes the sea level to rise.

The modeling involves covering the area that we model with a grid. Each unit of the grid has inputs and outputs of material and energy, depending on the changes that take place within the unit. What usually distinguishes relatively simple models from much more complicated models is the size and dimensionality of the grid. Even the most complex climate models used to project climate over the next century have a typical horizontal resolution of hundreds of kilometers. The computing power needed to increase the resolution to the size of a small country or a typical large city is not yet available. Many important elements of the climate system, such as clouds and land structures, are much smaller than this scale. The influence of these subgrid elements on the climate model is usually introduced through a process called parameterization. The parameters are typically average quantities either computed with much more restricted models or directly measured at a few locations. All models, no matter how complex, require parameterizations and carry the uncertainty associated with them. A valid question is, why not always use the most elaborate, three-dimensional models? The main reason is time and economics. If one needs to investigate changes in predictions as a result of changes in many input parameters, then it is often impractical to use the most expensive and time-consuming computer models.3–5
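The sequential chain described above can be sketched end to end with textbook-level approximations. The logarithmic forcing fit and the constant sensitivity parameter below are standard simplifications chosen for illustration, not the parameterizations of an actual IPCC model, and the numbers are approximate.

```python
import math

# A toy version of the sequential chain: GHG concentration -> radiative
# forcing -> equilibrium temperature change. The forcing fit
# (5.35 * ln(C/C0) W/m^2) and the sensitivity parameter (0.8 K per W/m^2)
# are common textbook approximations, used here only as a sketch.

def forcing(c_ppmv, c0_ppmv=280.0):
    """Radiative forcing (W/m^2) of CO2 relative to the preindustrial level."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

def equilibrium_warming(f_wm2, lam=0.8):
    """Equilibrium temperature change (K) produced by a given forcing."""
    return lam * f_wm2

# Doubling CO2 from the preindustrial 280 ppmv:
dT = equilibrium_warming(forcing(560.0))
print(round(dT, 1))   # ~3.0 K, consistent with the climate sensitivity
                      # discussed later in the chapter
```

Each function stands in for an entire modeling community; the real uncertainty lives inside these links, not in the arithmetic that chains them together.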

FEEDBACKS, UNCERTAINTY, AND PREDICTIONS

As Figure 8.1 reminds us, land, sea, and the atmosphere are interconnected through various chemical and physical processes that take different times to equilibrate, ranging from processes with practically instantaneous equilibration to processes that can take thousands of years, such as the equilibration between deep ocean water and the atmosphere. These equilibration processes change with the temperature. Water evaporation can serve as a good example of some of the complexities of these interdependencies; this issue was discussed in some detail in the previous chapter. Other areas in which feedbacks play important roles include snow and ice coverage, vegetation coverage, and the various processes that take place in the carbon cycle. Most models include at least some aspects of the fast-feedback processes, but the uncertainties are significant. These uncertainties are compounded by the uncertainties in modeling human contributions to the emissions of GHGs because such modeling requires estimates of future social behavior, including future energy policy, the price of fuel, changes in the standard of living, and changes in wealth distributions—not counting catastrophic events such as wars and natural calamities. The concept of climate sensitivity was introduced to separate the uncertainties involved in studying the feedback processes from the uncertainties in estimating future human behavior. Climate sensitivity is the expected average global temperature increase for a given increase in the atmospheric concentrations of GHGs. A typical yardstick is the expected global average temperature increase that results from a doubling of atmospheric carbon dioxide concentrations from preindustrial levels.

A typical set of predictions from the IPCC's fourth assessment report is shown in Figure 8.2. The figure shows a bounded area that indicates the uncertainty in the predictions and a line in the middle for the most likely prediction. The horizontal axis measures the GHG concentrations in units of CO2 equivalents (see Chapter 6 on radiative forcing). Taking the IPCC metric for climate sensitivity as a measure of the equilibrium temperature due to doubling the GHG concentration from the preindustrial level of 280 ppmv, one obtains a value of 3°C with an uncertainty range of 2°C to 4.6°C. In principle these numbers should reflect pure physical measurements with much smaller uncertainty, because one should be able to construct a relatively simple model such as a greenhouse, adjust the properties of the glass to reflect the desired radiative forcing, and measure the resulting temperature increase for various light intensities. After all, Earth is a physical system that one could simulate. In principle, this should have nothing to do with the ability to predict the future, because all the uncertainty in predicting the future is anchored on the prediction of the atmospheric concentrations of the GHGs. The main uncertainty in the predictions of Figure 8.2 rests on our limited knowledge of a greenhouse as complicated as Earth. Two issues stand out. First, the temperature in Figure 8.2 is

Figure 8.2. Projected global temperature increase as a function of the increase in atmospheric concentrations of GHGs expressed in CO2 equivalents. The roman numerals indicate various stabilization scenarios.

Source: Intergovernmental Panel on Climate Change (2007).5

equilibrium temperature, and we do not have a very good idea about the temperature's equilibration times, mainly, again, because of the strong interconnection between the atmosphere and the land and the sea. Second, the uncertainty also rests on the fact that a significant contribution to the climate sensitivity comes from the feedback mechanisms previously described. Feedback mechanisms will also be discussed in future chapters and are often referred to as tipping points. Figure 8.2 also includes a division of the climate sensitivity area into a few domains that describe various stabilization regimes. In future chapters I will explore the probable consequences of these regimes.
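The shape of Figure 8.2's bounded area can be mimicked numerically. Assuming warming grows logarithmically with concentration (a standard approximation, not the IPCC's actual calculation), the sensitivity range of 2°C to 4.6°C per doubling, with a best estimate of 3°C, maps each CO2-equivalent concentration onto a band of equilibrium temperatures.

```python
import math

# Sketch of the uncertainty band in Figure 8.2: equilibrium warming as a
# function of CO2-equivalent concentration, for the low, best-estimate,
# and high values of climate sensitivity per doubling.

def warming(c_co2eq_ppmv, sensitivity_per_doubling, c0=280.0):
    """Equilibrium warming (deg C) assuming a logarithmic dependence."""
    return sensitivity_per_doubling * math.log2(c_co2eq_ppmv / c0)

for c in (450, 560, 700):
    low, best, high = (warming(c, s) for s in (2.0, 3.0, 4.6))
    print(f"{c} ppmv CO2-eq: {low:.1f} to {high:.1f} C (best {best:.1f} C)")
```

The spread between the low and high curves is the physical (feedback) uncertainty; which concentration we actually reach is the separate, human uncertainty that climate sensitivity was designed to factor out.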

VALIDATION

How do we know how good these predictions are? In terms of weather forecasting, we have no problem: within a day or so we know how close the forecaster was to the actual weather, and in time we develop either faith or skepticism about the ability of the forecaster. Decadal climate forecasting is a different business. None of us will live long enough to test the accuracy of such forecasts. Yet we are expected to devise policies based on these forecasts—some of which require painful sacrifices.

There are three ways that existing climate models are being tested:

  • 1. The first method is very similar to validations of models used for weather forecasting. The climate model can run over a number of years of simulated forecast, and the climates generated by the model are constantly compared with observations. For a model to be considered valid, the average distribution and seasonal variations of parameters such as surface pressure, temperature, and rainfall must compare well with observations. All present climate models being considered by the IPCC pass this test.
  • 2. In the second method, models can be compared against simulations of past climates in which the distributions of past climatic variables, such as variations in the orbital motion of Earth around the sun affecting the distribution of solar energy on Earth, were significantly different from their present values. The largest manifestation of such changes is during ice ages. In Box 8.1 we saw that the basic assumption made about the global system is that it is deterministic and Newtonian. We know of many physical non-Newtonian systems. These systems may be too small, and as a consequence, their positions and velocities cannot be determined simultaneously to perfect accuracy (i.e., they are subject to Heisenberg's famous uncertainty principle). Or these systems move very fast, resulting in space and time that change with velocity; such systems are subject to Einstein's relativity theory. Presently, nothing in Earth's climate system requires deviating from Newton's laws. The Newtonian approximation is a very good approximation for Earth's material and energy balance. A key feature of the Newtonian approximation is that once we know in detail the situation at any particular time, we can, in principle, insert all the velocities, positions, masses, and forces into Newton's equations and calculate the properties at other times—future and past. Such systems are symmetrical under time reversal. The time-reversal symmetry enables us to test the models by inserting present-day measurements to calculate the climate in the past. We can try to validate these "predictions" through direct measurements by using the techniques described in Chapter 3.
  • 3. The third way is to use the models to predict the effects of large perturbations on climate. These perturbations include El Niño effects, large volcanic eruptions, and so on. Here the present models are not yet very effective, mainly because of the limited spatial resolution that they offer.
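The second method leans on the time-reversal symmetry of Box 8.1, and that symmetry can itself be demonstrated numerically. The sketch below uses a mass on a spring, chosen purely for simplicity: integrate forward with a time-symmetric scheme, flip the sign of the velocity, integrate the same number of steps again, and the initial state is recovered. This is the numerical analogue of "predicting" the past climate from present-day measurements.

```python
# Time-reversal symmetry of a Newtonian system (F = -k x), demonstrated
# with the velocity-Verlet integrator, which is itself time-symmetric.

def verlet(x, v, k, m, dt, steps):
    """Velocity-Verlet integration of a mass m on a spring of stiffness k."""
    a = -k * x / m
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = -k * x / m
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x, v = verlet(x0, v0, k=4.0, m=1.0, dt=0.01, steps=5000)    # forward in time
xb, vb = verlet(x, -v, k=4.0, m=1.0, dt=0.01, steps=5000)   # velocity reversed
print(round(xb, 6))   # the initial position, recovered
```

Running the physics "backward" is legitimate precisely because nothing in Newton's equations distinguishes the two directions of time; the comparison with ice-age records then tests the model rather than the arithmetic.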
THE RISE IN SEA LEVEL

Sea-level rise is probably the most directly disruptive predicted outcome of global warming.6 Over half the world's population, about 3.2 billion people, presently occupy a narrow zone about 200 km wide that stretches along the coasts of the global oceans, and they are bound to be directly affected by a significant rise in the sea level.

History

Figure 3.2 in Chapter 3 describes the temperature history of Earth over the last 400,000 years. The data for this figure came from the study of the Vostok ice core in Antarctica. The main climatic cycles in this figure are astronomically driven temperature oscillations with a dominant period of about 100,000 years. The figure shows that we are now in the warm phase of the last ice-age cycle, which started more than 100,000 years ago with the temperature dropping by about 5°C. The temperature oscillated around this value for the next 80,000 years. About 20,000 years ago, the temperature started to rise again. In the previous chapter we saw connections between the energy cycle and the water cycle. As the global temperature fell, the temperature difference between the polar regions and the equator increased. This process drove water evaporation from the oceans that ended up as precipitation on land in the form of snow and ice. When snow falling in the wintertime exceeds snow melting during summertime, glaciers form and grow. The formation of glaciers peaked around 20,000 years ago. Glaciers at that period covered much of North America, Europe, and large areas in Asia. About one-third of the present land surface was covered with ice. This process essentially moved water from the oceans to land. The resulting sea level at the peak of this process was about 125 m below today's sea level: Alaska was linked to Siberia, England was linked to the European continent, Papua New Guinea was linked to Australia, and Tasmania and Malta were not islands. As the climate started to warm, the melting during the summer started to surpass snowfall during the winter, and the glaciers started to melt and retreat. Water started to move back from land to sea with a resulting rise in the sea level, a process that continues to take place today. The warm period in the cycle is called the interglacial period.

There is strong geological evidence to suggest that during past warm periods, sea levels were on average between 3 m and 20 m higher than current sea levels.

Mechanisms for Sea-Level Rise

The sea level rises during the warming periods for two main reasons. First, with a few important exceptions, all objects expand with increasing temperature. The water in the oceans at temperatures above 4°C expands with increasing temperature. As a result of this expansion, the volume of the water increases and the sea level rises. A simple quantitative estimate of this process is given in Box 8.2. Second, ice caps, ice fields, and mountain glaciers are melting. In this process water returns from land to sea. This process is more difficult to model because it depends not only on the average temperature but also on the distribution of temperatures between different regions. Such distributions are difficult to model because of the models' limited spatial resolution. The projected sea-level rise comes at the end of the chain of climate predictions, so all the uncertainties accumulate. The most important parameter for the prediction of sea-level rise is the predicted rise in average global temperature. Figure 8.3 is an example of the IPCC predictions of future sea-level rise superimposed on actual measurements from the beginning of the 19th century. The sea level rose gradually through the 20th century and continues to rise at an accelerated pace.

Figure 8.3. Predictions of the sea-level change under climate change scenarios, showing estimates of the past, the instrumental record, and projections of the future. The vertical axis spans sea-level change from −200 mm to 500 mm; the horizontal axis spans the years 1800 to 2100.
Source: Intergovernmental Panel on Climate Change (2007).5

Box 8.2

THERMAL EXPANSION

With a few important exceptions, the volume of all objects increases with temperature. The parameter that describes the degree of this expansion is the coefficient of thermal expansion, defined as

Coefficient of volume expansion = (1/Volume) × (Change in volume/Change in temperature). [8.2]

A very important exception to this rule is water at temperatures between 0°C and 4°C. In this temperature range, the volume of a fixed quantity of water decreases with rising temperature; the coefficient of thermal expansion becomes negative. Because of this property, the volume of a given quantity of liquid water is smallest at 4°C. Because the density of a material is defined as the ratio of mass to volume, the density of liquid water is highest at 4°C. This means that water at 4°C is heavier than water at 0°C, where it can be in equilibrium with ice. The heavier water can sink and the ice can float on top of it. That is why lakes and ponds freeze first at their upper surface.

Calculating the Rise in Sea Level

We will calculate here the expected sea-level rise due to thermal expansion of ocean water as the average global temperature rises by 3°C, a value that roughly corresponds to a doubling of the preindustrial level of atmospheric CO2 concentration. The coefficient of thermal expansion of water at 15°C is 1.5 × 10⁻⁴ °C⁻¹ (the units of this coefficient are inverse temperature). The area of Earth's oceans is 0.36 billion km². To calculate the volume of water, we need the depth. The average depth of Earth's oceans is about 5 km. However, as was discussed in Chapter 4 in the context of the carbon cycle, differences in salinity and temperature result in a density distribution that affects the mixing between the deep ocean and the surface ocean and slows it to the order of thousands of years. It will take that long for the extra heat to diffuse to the deep oceans. For shorter periods (compared to human experience), the surface ocean and the deep ocean are thermally isolated from each other. The dividing line is sometimes called the thermocline. On average it falls at a depth of 1 km. This layer is also not uniform: the surface layer, which spans approximately the first 100 m, is homogeneous because the mixing due to surface winds and surface currents is very efficient. The next layer, called the pycnocline zone, is a transition layer between the deep ocean and the surface layer. In this layer the temperature changes with depth such that the upper boundary matches the temperature of the surface zone whereas the lower boundary matches the temperature of the deep ocean.

    Let us now calculate the rise in sea level due to the thermal expansion of just the surface layer under a homogeneous temperature rise of 3°C. In the previous equation, the volume appears in the denominator and the change in volume appears in the numerator. We can assume to a very good approximation that as long as the sea-level rise is small, the change in the surface area is small and can be neglected. That means that the unchanged surface area of the oceans appears in both the numerator and the denominator and cancels out, and the equation can be rewritten as

    Sea-level rise = Coefficient of volume expansion × Depth of surface layer × Change in temperature.    [8.3]

    If we now insert the value of the coefficient of thermal expansion (1.5 × 10⁻⁴ °C⁻¹), the depth of the surface layer (100 m), and the change in temperature (3°C), then the expected rise in sea level should be 0.045 m, or 4.5 cm. Much more elaborate models that take into account temperature changes in the pycnocline levels predict, under the same circumstances, a rise in sea level due to thermal expansion in the range of 15–28 cm.
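The arithmetic of Equation 8.3 can be checked in a few lines; this sketch simply multiplies the three values quoted above:

```python
# Sea-level rise from thermal expansion of the surface ocean layer (Eq. 8.3):
# rise = (coefficient of volume expansion) × (layer depth) × (temperature change)

alpha = 1.5e-4   # thermal expansion coefficient of water at 15 °C, per °C
depth = 100.0    # depth of the well-mixed surface layer, m
delta_T = 3.0    # assumed global temperature rise, °C

rise = alpha * depth * delta_T
print(f"Expected sea-level rise: {rise:.3f} m ({rise * 100:.1f} cm)")
# → Expected sea-level rise: 0.045 m (4.5 cm)
```

Note that this treats only the 100-m surface layer as warming uniformly; warming in the pycnocline, as mentioned above, raises the estimate considerably.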

    ANTARCTICA

    It is obvious from Table 8.1 that catastrophic consequences would result from the melting of the Antarctic ice. Antarctica, however, is not yet melting; its ice cover is slightly increasing. In Chapter 14, where I discuss early signs of climate change, we will see that the ice is starting to break at the edges. In terms of sea-level rise, Antarctica now serves as a water sink and not a water source.

    Antarctica is the southernmost continent, covering approximately 14.2 million km² (5.5 million square miles). It is divided into two unequal parts, East and West Antarctica, separated by the 3200-km-long Transantarctic Mountains. East Antarctica, a high, ice-covered plateau, is the larger of the two regions. West Antarctica consists of an archipelago of mountainous islands covered and bonded together by ice. A great deal of the ice in West Antarctica is bonded to the rocks on the floor of the sea and thus lies below sea level. The average thickness of the ice in Antarctica is 2 km. Many parts of the Ross and Weddell Seas that penetrate the continent are also covered by ice shelves and ice sheets that float on the sea.

    Table 8.1. Estimated potential maximum sea-level rise from total melting of present-day glaciers

    Location                                            Volume (km³)    Potential sea-level increase (m)
    East Antarctic ice sheet                              26,039,200    64.80
    West Antarctic ice sheet                               3,262,000     8.06
    Antarctic peninsula                                      227,100     0.46
    Greenland                                              2,620,000     6.55
    All other ice caps, ice fields, and valley glaciers      180,000     0.45
    Total                                                 32,328,300    80.32

    Source: US Geological Survey.7
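The table's entries can be roughly cross-checked by converting an ice volume to an equivalent sea-level rise. The densities and the fixed ocean area used below are my own assumptions, not values stated by the table, so the result only approximates the tabulated numbers:

```python
# Rough cross-check of Table 8.1: sea-level rise if a given ice volume melts.
# Assumptions (mine, not the table's): ice density 917 kg/m³, seawater
# density ~1000 kg/m³, ocean area 0.36 billion km², and no change in
# ocean area as the level rises.

def sea_level_rise_m(ice_volume_km3,
                     ice_density=917.0,       # kg/m³
                     water_density=1000.0,    # kg/m³
                     ocean_area_km2=0.36e9):
    """Approximate sea-level rise (m) from melting the given ice volume."""
    water_volume_km3 = ice_volume_km3 * ice_density / water_density
    return water_volume_km3 / ocean_area_km2 * 1000.0  # km → m

# East Antarctic ice sheet, 26,039,200 km³:
print(f"{sea_level_rise_m(26_039_200):.0f} m")  # → 66 m
```

The result, about 66 m, is close to the 64.80 m listed in Table 8.1; the discrepancy presumably reflects corrections (such as ice already sitting below sea level) applied by the table's authors.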

    Antarctica was ice-free during most of its history, but it was different both in location and shape. It was farther away from the South Pole and it included Australia and parts of Asia and South America. Continental drift over time is responsible for the present layout of the continents. This does not exclude the possibility that Antarctica will again be ice-free. The climate in Antarctica is very cold: the mean winter temperatures are −20°C to −30°C (−4°F to −22°F) on the coast and −40°C to −70°C in the interior. Midsummer temperatures are 0°C (32°F) on the coast and −20°C to −35°C (−4°F to −31°F) in the interior. On the Antarctic Peninsula summer temperatures can be as high as 15°C (59°F). Because of the low temperatures, the Antarctic atmosphere is very dry. What little humidity there is in the air comes from the ice-free regions of the surrounding southern oceans. As a result of the low humidity, the average precipitation on the polar plateau is only 50 mm (2 in.; water equivalent) per year and about 10 times as much in the coastal regions. In that sense the polar plateau must be considered a dry desert in spite of the fact that the surface is covered with a thick layer of ice (the common definition of a desert is an area with annual precipitation of less than 10 in.). As the projected temperature rises, the humidity will increase, and as long as the average temperatures remain well below freezing, most of the resulting precipitation will be snow, resulting in a net removal of liquid water from the oceans and thus a net decrease in sea-level height.

    TIDES, GRAVITY, AND HOW HIGH IS HIGH

    Is the ocean one big, flat surface that stretches around Earth, so that any rise in sea level relative to the surrounding land is a major threat to the roughly half of the world’s population settled along its coasts? Not exactly.

    The height of the surface of the oceans and connected estuaries, bays, and lagoons is in constant change. Every 24 hours and 50 minutes we experience two high tides and two low tides. The height difference between the high and low tides in the middle of the ocean is about 1 m (3 ft.), but in a few places it can reach much greater heights. In the Canadian Bay of Fundy, the difference between high and low tides can reach 16 m (52 ft.). These height changes are sometimes depicted as the “heartbeats” of Earth. When humans inhabit the coastline, they usually have to take these heartbeats into account; otherwise they can find themselves in deep water. What are the reasons for these heartbeats? In one word: gravity. A simple description that connects gravity with tides is given in Box 8.3.

    Box 8.3

    TIDES AND GRAVITY

    The formulation of the law of gravity and its relation to the formation of tides can be traced to Isaac Newton, the same giant of modern science discussed in Box 8.1 in the context of his laws of motion.

    Newton felt that the laws of motion that worked on the surface of Earth should also apply to the motions of stars and planets in the sky.

    The moon orbits Earth, and all the planets orbit the sun. What force drives their orbital motion? Newton’s genius was to recognize that this force is the same one that keeps us “glued” to the surface of Earth and is responsible for the fall of all unsupported objects. Newton was able to quantify this force of gravity with the following equation:

    Force of gravity = Constant × (Mass of object 1 × Mass of object 2) / (Square of distance between the objects),    [8.4]

    where the constant is a universal constant measured with a great deal of precision using laboratory experimental setups. The force of gravity in this form explained the trajectories of heavenly bodies but also predicted the existence of unknown planets from deviations in the calculated trajectories. These predictions were later confirmed by direct observations. Now let us look at the Earth–moon system a bit more closely.
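Equation 8.4 can be applied directly to the Earth–moon system. The gravitational constant, the two masses, and the mean distance below are standard textbook values, not numbers quoted in this chapter:

```python
# Newton's law of gravity (Eq. 8.4) applied to the Earth–moon system.
# All values are standard textbook figures (assumed here, not from
# this chapter), in SI units.

G = 6.674e-11        # universal gravitational constant, N·m²/kg²
m_earth = 5.97e24    # mass of Earth, kg
m_moon = 7.35e22     # mass of the moon, kg
distance = 3.84e8    # mean Earth–moon distance, m

force = G * m_earth * m_moon / distance**2
print(f"Earth–moon gravitational force: {force:.2e} N")  # ≈ 2 × 10²⁰ N
```

This single attractive force, combined with the moon's orbital motion, is what the tidal argument below builds on.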

    Earth and the moon are two masses that attract each other through gravitational force. The force changes with the square of the distance between the objects. Point A in Figure 8.4 is closer to the moon than point B. As a result, the gravitational force on a given mass at A is stronger than on a given mass at B. This difference in forces has only slight effects on solid materials, but water is free to move, and the water at point A moves away from its solid support closer to the center of Earth, which experiences less of a pull because of the larger distance. The reverse happens at point B but with the same final result: here the solid support is closer to the moon, and it pulls away from the surface water. The result is again high tide. Between A and B we have the points of low tide. The result is two high tides and two low tides per cycle. A cycle consists of Earth’s spin around its axis corrected for the changing position of the moon during this time period. Because the two motions are in the same direction, one cycle takes about 24 hours and 50 minutes. The sun’s gravitational pull makes a contribution, but it is considerably smaller than that of the moon because of the much larger distance between Earth and the sun.
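The moon's dominance over the sun can be made quantitative. The tide-raising (differential) force of a body scales as its mass divided by the cube of its distance, not the square; the masses and distances below are standard values, not taken from this chapter:

```python
# Tidal (differential) force comparison, moon vs. sun.
# Tide-raising force scales as mass / distance³. Masses and mean
# distances are standard textbook values (assumed, not from this
# chapter), in kg and m.

m_moon, d_moon = 7.35e22, 3.84e8     # moon mass and Earth–moon distance
m_sun, d_sun = 1.99e30, 1.496e11     # sun mass and Earth–sun distance

tidal_moon = m_moon / d_moon**3
tidal_sun = m_sun / d_sun**3

print(f"Moon/sun tidal-force ratio: {tidal_moon / tidal_sun:.1f}")  # ≈ 2.2
```

A ratio of about 2.2 is consistent with the statement above: although the sun is vastly more massive, its far greater distance makes its tidal contribution considerably smaller than the moon's.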

    Figure 8.4. Earth and the moon. Points A and B mark the sides of Earth nearest to and farthest from the moon.

    RAMIFICATIONS OF UNCERTAINTIES

    Emphasizing the uncertainties involved in our attempts to model the future was unavoidable until now. Uncertainties and margins of error tend to leave us skeptical, and the most obvious retreat in the face of such skepticism is to do nothing and hope everything will turn out OK. In spite of these uncertainties, the overall trend is inescapable: the continuing use of fossil fuels is changing the composition of the atmosphere, and this change leads to an average temperature increase that in turn will cause the sea level to rise. How much of a rise, and when it will happen, are subject to uncertainties. The time that most of the models address is hardly beyond our definition of “now” (Chapter 2). Uncertainties cut both ways: we might be wrong in either direction. If we could be wrong, then it might be useful to take, and prepare for, the worst-case scenario. As far as humans are concerned, the worst-case scenario is that the global temperature will rise to such a degree that all global ice and snow will melt. Table 8.1 shows the potential rise in sea level as a result of such a meltdown. The rest of the book tries to analyze the consequences of our role.