Monday, March 21, 2011

Let Me Count the Ways



Even people who claim (sometimes proudly) to be bad at mathematics can usually add or multiply multi-digit numbers with pencil and paper. Such feats would have astonished our Roman forebears, engineers and all, because the Hindu-Arabic number notation that allows the easy expression and manipulation of large numbers did not even exist until about 800 AD. Nor did the concepts and tools for fractions and decimals that support modern technology. It required the concept of zero as a number to make this notation system possible, and that concept appeared around 300 AD.


The earliest attempts to symbolize quantities of things worth recording took the form of hash marks on a malleable substrate- a clay tablet, a stone, and so on- one hash mark per item. The next step up is probably one that is still used today: groups of five, that is, four vertical hash marks followed by a diagonal line that cuts through them. This is a numbering system with only two symbols, one for five and one for one, and the choice of five as the quantity for the next symbol is obvious.

It is only a short step to realize that five can also be represented by any symbol different from the one used for one. For example, a one can be represented by a finger on the left hand, with each right-hand finger representing a five. This provides a handy way to represent any number up to 30 (five fives plus five ones) by a show of fingers.

We could also extend the range of representation by using multiple hash marks up to nine, then using a different symbol to represent ten. This was the approach used by the Babylonians; each “Y” symbol represented one, while each “<” symbol represented ten. By piling up combinations of these two symbols, they could count up to fifty-nine. Thus, the symbol for 51 is <<<<<Y (5 tens and 1 one).

To count even larger numbers, the Babylonians introduced the concept of positional notation, a key feature of modern numbers. When the count reached sixty, the Babylonians started over with the symbols, adding a Y to the left to represent one group of sixty, followed by another group to the right to represent the excess over sixty. After sixty groups of sixty, yet another column to the left was added. There is some potential for confusion with this system, which requires the symbols to be carefully grouped and spaced.


The Babylonian choice of sixty as a number base is still with us, in measures of time and angles. The important concept here is that the position of a symbol matters as well as its shape. The Babylonians were thus able to represent any number with a combination of only two types of symbols. This feature became important in the development of the abacus.

The South American Incas also caught on to positional notation as a way to represent numbers, and they did it in base 10. In this case, inventories were recorded as knots along a string, in a bundle called a quipu. There was basically only one symbol, i.e. a knot, but the position along the string determined whether a group of adjoining knots represented the number of ones, tens, or hundreds within the recorded quantity. This representation was purely positional. See: http://www-history.mcs.st-and.ac.uk/HistTopics/Inca_mathematics.html

This point was lost on the Romans. Their contribution was to extend the number of symbols used to represent groups of different size- I for ones, V for fives, X for tens, L for fifties, C for hundreds, and so on- but the symbols were never re-used to represent different quantities in different positions. A Roman numeral representation of large numbers is hard to read and impossible to use for calculations. That obscurity is useful for movie makers who don’t want to date their movies with copyright years that are easy to read, e.g. MCMLXXXIX = 1989.

The modern Hindu-Arabic notation is built on positional notation, and it includes a symbol for zero to serve as a place-holder and avoid potential confusion. With this notation, we can use any desired number N as the base of the representation; N is usually 10 (decimal notation), but N = 16 (hexadecimal) and N = 2 (binary) are also useful and popular. Base N notation requires N distinct symbols: zero plus N-1 others.
In a base 10 number system, a three-symbol number (say abc, where the letters represent digits) is interpreted as follows:
The number of ones is c.
The number of tens is b.
The number of hundreds is a.  And so forth.
Each position to the left represents another power of N (here, 10).  The symbol sequence abc means:
                                a × 10² + b × 10¹ + c × 10⁰
This sequence could also be interpreted in base 16 as:
                                a × 16² + b × 16¹ + c × 16⁰
Or in binary as:
                                a × 2² + b × 2¹ + c × 2⁰
For fractional numbers, we just continue beyond the decimal as follows (in base 10):
                   abc.de = a × 10² + b × 10¹ + c × 10⁰ + d × 10⁻¹ + e × 10⁻²
Clearly, we can also represent fractional numbers with any other number base.
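
To see the positional idea in one place, here is a minimal Python sketch (the function name and the digit-list format are my own choices, not anything from the text) that interprets a sequence of digits in any base N by repeatedly multiplying by N and adding the next digit:

    def from_base(digits, base):
        # Interpret a list of digit values (most significant first) in the given base.
        value = 0
        for d in digits:
            if not 0 <= d < base:
                raise ValueError("digit %d is not valid in base %d" % (d, base))
            value = value * base + d   # shift everything one position left, then add the new digit
        return value

    # The same digit sequence [1, 2, 3] means different quantities in different bases:
    print(from_base([1, 2, 3], 10))   # 123  (1 x 10^2 + 2 x 10^1 + 3 x 10^0)
    print(from_base([1, 2, 3], 16))   # 291  (1 x 16^2 + 2 x 16^1 + 3 x 16^0)
    print(from_base([1, 2, 3], 60))   # 3723 (Babylonian-style base 60)
    print(from_base([1, 0, 1], 2))    # 5    (binary)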

Hindu-Arabic notation in base 10 has become the accepted language for counting things, but some numbers are just too big for it- national debts, distances to stars, the number of angels on the head of a pin. For really big numbers, the most important symbols are the ones in the three leftmost columns; anything to the right of them is relatively small potatoes.

It is nearly universal, in such cases, to round off to the nearest number of millions or billions or septillions or googols (1 googol = 10¹⁰⁰). Notice that we are again borrowing the Roman practice of giving separate names to groups of different size: (1000 = “thousand”, 1000 thousand = “million”, 1000 million = “billion”, and so on.)

The symbolic way to handle this is scientific notation, where the appropriate power of 10 is specified, e.g. 123 million is written as 123 × 10⁶.  But it is usually expressed with only one digit before the decimal point, i.e.
                                123 million = 1.23 × 10⁸
                                123 billion = 1.23 × 10¹¹
This notation lends itself to simple algorithms for multiplication, division, exponentiation, etc., and extends as well into the range of really small numbers, e.g. a length of 5 billionths of a meter is represented as 5 × 10⁻⁹ meters.
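
Scientific notation is also how most programming languages read and write very large and very small numbers. A small Python illustration, reusing the examples from the text (the variable names are mine):

    debt = 1.23e11            # 123 billion, i.e. 1.23 x 10^11
    length = 5e-9             # 5 billionths of a meter, i.e. 5 x 10^-9
    print("%.2e" % 123_000_000)      # 1.23e+08, i.e. 1.23 x 10^8
    print("%.2e" % (debt * length))  # 6.15e+02: exponents simply add under multiplication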

For scientists, exponential notation is usually sufficient to represent any physical quantity that arises in the study of the universe: the number of hydrogen atoms in 1 gram of the substance is about 6 × 10²³; the number of atoms in the (visible) universe is about 10⁸⁰ (considerably less than a googol); the smallest unit of distance that it makes sense to talk about (the Planck length) is a little over 10⁻³⁵ meters.

To be sure, exponential notation is not actually required to display such large numbers; it is only more convenient than writing down all the zeros. Even a googol (10¹⁰⁰) could actually be written down as “1” followed by 100 zeroes. But some numbers of interest are far too large to be handled this way. In string theory, calculations can be made about the possible number of universes that could be generated with different force laws, compared with the combination of force laws that we actually measure. Such calculations arrive at numbers like 10^googol.

That is a “1” followed by a googol of zeroes (called a googolplex). It cannot be written out in conventional notation: if we could write one “0” on each atom in the known universe, we would run out of atoms before we could write down the entire number. And it would take a long time.

Physicists deal with some large numbers but, for mathematicians, numbers never stop. They recognize- and can prove- that some collections are infinite in size: the number of integers, the number of prime numbers, the number of points on a line, and so on. They also asked whether some infinities are bigger than others. One might think that an infinite number of things would be hard to count, but a mathematician named Georg Cantor figured out ways to do it. And the amazing result he proved was that some infinities are of the same size, while other infinities are much bigger- infinitely so.

For example, the “rational” numbers are composed of all the integers (positive and negative), plus all the numbers that can be represented as ratios of two integers. There are infinitely many such numbers. But Cantor showed that the rational numbers can- in principle- be “counted”, meaning that you can put every possible rational number into a one-to-one correspondence with an integer. So the (infinite) set of integers has the same size as the (infinite) set of all rational numbers; these are countable infinities.
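
One way to see how such a counting can go: walk through the positive fractions diagonal by diagonal (all fractions whose numerator and denominator add to 2, then to 3, then to 4, and so on), skipping duplicates, so that every positive rational receives exactly one integer label. A small Python sketch of this pairing- my own illustration of the idea, not Cantor's original presentation:

    from fractions import Fraction
    from math import gcd

    def first_rationals(limit):
        # List the first `limit` positive rationals, diagonal by diagonal.
        found = []
        total = 2                        # numerator + denominator starts at 2 (i.e. 1/1)
        while len(found) < limit:
            for p in range(1, total):
                q = total - p
                if gcd(p, q) == 1:       # skip duplicates such as 2/4, which equals 1/2
                    found.append(Fraction(p, q))
            total += 1
        return found[:limit]

    # The correspondence: integer n <-> the n-th fraction in this ordering
    for n, r in enumerate(first_rationals(10), start=1):
        print(n, r)
    # prints: 1 1, 2 1/2, 3 2, 4 1/3, 5 3, 6 1/4, 7 2/3, 8 3/2, 9 4, 10 1/5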

But there also exist all those numbers that are irrational- they cannot be represented as a ratio of two integers. Their decimal representation extends forever, with the digits (or groups of digits) never repeating [1]. Many numbers fit this category, for example π = 3.1415926535…, where the “…” means that the digits go on forever without repeating (and it can be proved that they never do). The natural logarithm base e = 2.718281828… and the square roots of many integers, e.g. √2 = 1.41421356…, are also irrational. Any random, non-repeating string of an infinite number of digits is an irrational number. In fact, between every two rational numbers there is an infinite number of irrational numbers. Cantor proved it. The set of irrational numbers is not countable- it is much larger than the set of rationals.
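
The core of Cantor's uncountability proof is mechanical: given any attempted list of decimal expansions, build a new expansion that differs from the n-th entry in its n-th digit, so it cannot appear anywhere on the list. A finite toy version in Python (purely my own illustration):

    def diagonal_escape(expansions):
        # Build a decimal that differs from the n-th listed expansion in its n-th digit.
        new_digits = []
        for n, digits in enumerate(expansions):
            d = digits[n]
            new_digits.append("5" if d != "5" else "6")   # any digit other than d will do
        return "0." + "".join(new_digits)

    listed = ["1415926535", "7182818284", "4142135623", "3333333333"]
    print(diagonal_escape(listed))   # 0.5555 -- differs from entry n at digit n, for every n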

Where scientists are usually satisfied with using the symbol ∞ to represent any old infinity, Cantor realized that we need different symbols to represent infinities of different sizes. The symbol typically used to represent the size of an infinity is the Hebrew letter aleph (not available on my keyboard), followed by an integer. Aleph followed by a 0 (called “aleph null”) is the smallest infinity, and successively larger infinities are represented with successively larger integers following aleph. 

Mathematicians have since been amusing themselves by classifying mathematical infinities into the various sizes, and determining whether the number of possible infinities is discrete and countable, or continuous and uncountable. Normal minds tend to spin out when contemplating such issues.

The result is that humans now have a language and a written representation for numbers sufficient to express and manipulate all humanly imaginable and unimaginable quantities. It remains an open question whether there is an actual infinity of universes, but physical reality has a long history of accurate correspondence with the imaginations of mathematicians. I wouldn’t put odds against it.
GPR: 3/21/11

[1] Any number that has an endlessly repeating series of digits can always be expressed as a ratio of two integers, e.g. 1.125125125… = 1124/999. It is a neat trick of math to do this calculation.
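
The trick can be made mechanical: if a block of k digits repeats, multiplying by 10^k and subtracting cancels the repeating tails, leaving the repeating block divided by 10^k - 1. A short sketch using Python's exact Fraction type (my own illustration of the footnote's claim):

    from fractions import Fraction

    def repeating_to_fraction(whole, block):
        # Convert whole.blockblockblock... into an exact fraction.
        # If x = 0.(block), then 10^k * x - x = block, so x = block / (10^k - 1).
        k = len(block)
        return whole + Fraction(int(block), 10**k - 1)

    print(repeating_to_fraction(1, "125"))   # 1124/999, the example from the footnote
    print(repeating_to_fraction(0, "3"))     # 1/3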

Wednesday, January 26, 2011

Can We Just Cooperate?

A living cell is a structure of immense complexity: a small bag of salty water filled with various polymer chains made from organic acids. Different polymers are clumped into different locations within the cell, to provide structural strength and to function as specialized production sites for energy generation, internal and external communication, waste management, infrastructure maintenance and recycling, and the construction of clones by cell division. Cells are, in fact, self-sufficient communities that harvest their environment for energy and construction materials, like small prehistoric communities of people, or like termite mounds. Single cells come in very different sizes, ranging from a typical 0.01 mm for bacteria up to an ostrich egg weighing over three pounds.

How such specialized machinery as proteins formed from the primordial soup, much less collected together to form stable and self-sufficient communities, is a major puzzle. And yet this happened relatively early in the 5 billion year history of the earth- within about 1 billion years after the crust first formed, according to the dating of stromatolite formations in shallow ocean waters. Varieties of cells have since proliferated to fill every available niche in the ecosystem, wherever there is any type of energy source and organic material on which to make a living, from deep ocean hydrothermal vents to Antarctic snow fields to upper atmospheric dust particles. The journey from unicellular life to multicellular structures took longer than did the first cell- about 2 billion more years- wherein different types of cells got together in cooperative efforts to communicate, metabolize and reproduce, forming meta-communities of individual cells. A pattern was developing here.

It was only in the past 10% of earth’s history- about 0.5 billion years ago- that multicellular life really took off in the Cambrian Explosion… or at least generated sufficiently hard structures to form fossils that we can find. This breakthrough seems to have been triggered by the gradual transformation of our atmosphere- by cells that use chlorophyll-based metabolism- into an oxygen-rich environment for the oxygen-breathing cell structures that are the basis of animal life. A new layer of community was added: plants using sunlight to turn CO2 into organic polymers + oxygen, and animal species using the plant material + oxygen to generate their own new cells + CO2… a perfect marriage.

The old, anaerobic cells never went away, but had to find environments away from the (to them) poisonous oxygen: in deep sea vents, underground compost pits, or inside the guts of animals, where they partnered up to help with food digestion. A modern human is a vast community of different types of cells, most of which have been adopted from outside and bear someone else’s DNA- about 200 different kinds of DNA, with such visitor cells far outnumbering the human cells that are generated after the union of sperm and ovum. Since some types of bacteria are inimical to human health, it can be tricky business to choose antibiotics that selectively remove only the bad guys. Also, it is not uncommon for the human immune system to turn against some internal bacteria that mean no harm, and generate auto-immune responses that benefit nobody.
While the evolved interdependence of cells, bacteria, viruses, plants, animals, and humans within earth’s ecosphere is a marvel of efficiency and complexity, there are some cycles of dependency that are just bizarre. Herewith, from the site everything2.com, about the life cycle of a liver fluke that affects cattle:
The lancet fluke's lifestyle is a migration through the fluids of three hosts: a cow, a snail and an ant. Hungry snails eat the dung of the infected cows and inadvertently swallow the sequestered fluke eggs. Once hatched in the snail's intestine, they burrow through the gut wall and into a digestive gland. Within this gland, the flukes produce a second generation - spewed back into the world by the tormented snail as balls of slime, each sticky drop a seething mass of flukes.
It is in the third host, the ant host, where we observe mind control. Foraging ants come across the sparkling orbs of snot and, sensing the source of moisture that they are, quench their thirst on the slick beverage. Entering the third hypersea reservoir, the flukes undulate through the ant's fluids; most form cysts in the abdomen, but some home in on the nerve clusters that control the mandibles in the ant's head.
The temperature drops into the coolness of evening, and the infected ant feels compelled to leave its brethren, forsaking them to climb a grass stalk spire to its apex - the pasture's emergent canopy. Preparing to make itself a sacrifice, it anchors itself to the flimsy blade, attached firmly by its mandibles.
It waits motionless throughout the night to be devoured by the primary host, the cow. Herds like ruminating clouds pass over the ant, blotting out the stars, the hoof falls reminiscent of distant thunderclaps. If the ant survives until morning the flukes relinquish their control, allowing the ant to scurry back to join its fellow workers in the gloom and away from the solar furnace which would be death to both the host and the backseat-driving parasite. By day the ant is a regular Joe, indistinguishable from any other ant, but when night falls it again makes its ascent into 'munch range', over and over until eventually consumed, drowned in cud, bursting open as a swarm of flukes within the cow's stomach. The flukes complete the cycle by penetrating the bovine's liver, becoming adult egg producers.
The liver fluke may be a champion in adaptability, but for real imagination in lifestyle, few can compete with the slime mold Dictyosteliomycota, the social amoeba. These amoebas spend their lives as individuals, grazing on bacteria on the forest floor. If food becomes scarce, however, a signal goes out and all the individuals start crawling toward a central spot. There they congregate to become a multicellular, slug-like creature that crawls over the floor to a new, more hospitable site. At the new site, the slug plants itself upon the ground and morphs into a plant-like structure, forming a fruiting body atop a stalk. The fruiting body casts out spores onto the wind, which settle on the floor to become a new colony of amoebas.
This blurry line between individual and communal behavior is echoed in beehives and anthills, where there is a case to be made that the significant organism is the community itself, with individuals being as expendable and replaceable as individual cells are within the human body. A blurry line also exists between life and non-life in the community of viruses. The tobacco mosaic virus can spend millennia in the form of a crystal, composed of aligned threads of RNA polymers. The crystal has a well-defined structure, which can be used to diffract X-rays like any other crystal. It seems the essence of crystalline solid matter, like table salt. But under appropriate conditions, the crystal disassembles itself to become an army of RNA virus threads that burrow into the cells of tobacco leaves, borrowing the cellular machinery of the leaf to generate more copies of itself. Such viruses cannot reproduce on their own, and so are considered non-living. This seems like a fairly arbitrary distinction; the concept of “alive” seems to be represented along a broad continuum, as perhaps does the concept of “consciousness”. But that’s a story for another time.
The remarkable feature of the evolution from organic acids to polymer chains to cells… all the way up the line to societies and national democracies is that- at every level of organization- there is a communal effort at work between individual components to form assemblies that communicate, compete, and cooperate with other assemblies. Complexity builds upon complexity. At each level of assembly, there emerges the concept of another individual entity, e.g. an organism, a person, a tribe, or a nation. It is difficult not to see direction and purpose in this.

Monday, January 24, 2011

Letting Go of Common Sense

I spent my first three decades learning how to do physics, and the next four trying to understand it. Doing physics was merely difficult, but understanding it seems impossible. As noted elsewhere, anyone who believes that they understand quantum mechanics has obviously not studied it very well. The fact that the theory works is unquestionable; it makes precise predictions about physical properties that have been measured to be accurate to within (at least) one part in 1,000,000,000,000. But it does this at the expense of treating particles of matter as though they do not possess actual locations or well-defined motions, but only some amalgamation of the two that does not really exist until someone decides to measure one or the other. Since the ultimate test of a theory is physical measurement, the outstanding accuracy of quantum theory seems unlikely to ever require an alternative explanation. It is “common sense” that has been shown to be the least reliable way of interpreting reality.

The unreliability of common sense should not be surprising. Human brains evolved to become adept at perceiving and surviving their everyday environment; accurate perception of the workings of quantum particles, galactic encounters, or the origins of the universe was never a requirement for survival. The triumph of science over the past 500 years has been due to the acceptance that the ultimate test of a theory is that it makes correct predictions and descriptions of physical observations that can be made and measured by other observers, regardless of how outrageous the theory seems. Our human intellect and intuition have repeatedly been humbled in the face of this requirement.

A common misunderstanding is that scientific knowledge never really progresses, but only changes from time to time, as new theories invalidate and replace the old ones. This seems true only if one takes the position that a physical theory should represent “actual” reality, rather than describe how the universe works. Within the accuracy of the measurements available at the time, the Ptolemaic solar system and Newtonian physics made adequate predictions. Only when measurement capability became more sophisticated, quantifying the behaviors of stars, atoms and photons with high precision, were the classical theories found to be inaccurate.

The changes that were needed to make the theories match the measurements have altered our viewpoints on reality in directions that seem incomprehensible: four-dimensional space-time curvature replaced gravitational force, time flows at different rates in different circumstances, and empty space is seething with virtual particles whose existence is temporary. These concepts are not just speculations, but are required in order to make the models mathematically consistent with the measurements. Which is all that we can do. To believe that our models now represent absolute reality is foolish, because our brains are finite and limited to the concepts of our experience. The universe may, in fact, be infinite in space and time.

My own attempts to contemplate an infinite universe with no beginning always produce slight dizziness and nausea. But it is consistent with the mathematics, and the universe has always been found to be mathematically consistent. In fact, advanced mathematics has been the one true guide to discovering the nature of physical reality: from calculus through tensor algebra, multidimensional vector spaces, group symmetries, and the topological properties of the Calabi-Yau manifolds that currently underpin string theory. It's not an easy language to learn.

I occasionally see articles predicting the death of the Big Bang model of creation, based on current speculations about multiverses within the cosmological community. These articles are written by non-scientists. Even though science has gathered a reputation for fickleness in its faithfulness to models, the Big Bang is currently in no danger of being replaced. It has not been contradicted, but strengthened, by increasingly accurate measurements from satellites and telescopes. We understand the first few seconds of the universe in much better detail than we understand the universe of today. It was much simpler back then.

What is now happening is that many physicists are frustrated by the apparent fine-tuning of our universe, in its gradual evolutionary buildup of stars, galaxies, elements, planets, molecules, organic matter, living organisms, consciousness, and intelligence. It seems a very unlikely and well-ordered progression, and one that would have been impossible if the forces of the universe had differed in the slightest from what they have been measured to be. And we don’t know what causes those forces to take on the values that they do.

Without recourse to a very intelligent designer for our universe, one possible explanation is that there are many other universes that have also randomly appeared, wherein the randomly-generated force constants have taken on many different combinations, but no universe much different from ours could ever have generated life. We’re just the lottery winners, although the players in other universes never got the opportunity to exist. Unless there really are an infinite number of other universes, in which case there are lots of other universes with intelligent life, many exactly like ours.

The multiverse theory in no way contradicts the Big Bang origin of our universe; it is an attempt to answer the question of what happened “before” the Big Bang, and why our universe is so perfect. In my mind, it is less about physics than philosophy, because it seems impossible to make measurements that could verify the existence of other universes outside our own space-time.

One great pleasure and satisfaction of studying science is in the growing awareness of the complexity, interdependence, and inevitability of the evolutionary processes that have created our existence. While we have some knowledge about the sequence of evolutionary events that led here from the Big Bang, we still have no basic understanding of how the complexity of conscious beings emerged from the maelstrom of baryons and leptons that comprised the early universe.  One might expect- in order for entropy to increase- that organized states should devolve into less organized states, so that complex sequences of molecules could never have been generated spontaneously, much less continue to organize into cells, organisms, and thinking creatures. But evolve and organize they do.

Classical thermodynamics is based mostly on the behavior of systems that are near equilibrium, i.e. a state of maximum probability and disorder, where energy flows have ceased. It is toward such equilibrium that systems eventually evolve, and our physics accurately describes how that evolution and energy flow proceed. What has not yet been well studied is the evolution of systems that are far from equilibrium, where large flows of energy are still occurring- for example, a planet bathed in radiation from a star. In such conditions, entropy can still increase via heat flows, while organization and complexity continue to increase.

These are metastable states- not yet maximized for entropy nor minimized for energy- but stable while the energy flows continue. If science is to understand the spontaneous evolution of complex forms within the universe, it seems that we need much better descriptions of nonlinear thermodynamic systems exchanging energy under conditions far from equilibrium. Long-range order arising from short-range interactions seems to be a common feature among galaxies, ocean waves, brains, and societies. It would be nice to know how this comes about, even if we don't really understand it. If you know what I mean.

Wednesday, January 12, 2011

Dancing on the Titanic

Prologue
It was reported that many of the passengers who were dancing on the Titanic never felt its collision with the iceberg. With a stopping distance of one-half mile, and a turning radius half again that large, the Titanic was powerless to avoid the iceberg, which came into view only about 45 seconds before impact. With engines full astern and the ship turning hard to port, her starboard flank scraped along the iceberg, which sliced through the steel plating below the waterline. A 300 foot gash was opened through six of the watertight compartments. A head-on collision that damaged only one or two compartments would have been better, if rougher. The actual damage was far greater than any envisioned by the ship's designers.

Planetary Thermal Physics
The equilibrium temperature of our planet is determined by a single mechanism: thermal radiation balance. The primary source of incoming radiation is the sun, whose output has been quite stable over recent time. Of the incoming solar radiation, earth directly reflects about 30% back into space, and the remainder is absorbed into the atmosphere and surface layers. Because the earth is a warm body (about 60°F on average), it also radiates energy back out. Not all of this radiated energy escapes, because there are gases in the atmosphere (greenhouse gases) that selectively reabsorb and retain some of it. The effect is similar to a blanket; it causes earth’s average temperature to be higher than it would be in the absence of greenhouse gases- about 60°F higher. The earth’s average temperature settles where the net energies of incoming and outgoing radiation are equal. Earth has remained at or very near this average temperature since the dawn of mankind.
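
The balance described above fits in a few lines. Here is a rough back-of-the-envelope sketch in Python, using standard textbook values (and deliberately ignoring the feedbacks discussed below), that reproduces the roughly 60°F greenhouse boost mentioned above:

    # Radiative balance: absorbed sunlight = emitted thermal radiation (Stefan-Boltzmann law)
    #   (S / 4) * (1 - albedo) = sigma * T^4   (the /4 spreads intercepted sunlight over the whole sphere)
    S = 1361.0        # solar constant at earth's orbit, W/m^2 (approximate)
    albedo = 0.30     # fraction of sunlight reflected straight back
    sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

    T_no_greenhouse = (S * (1 - albedo) / (4 * sigma)) ** 0.25
    T_measured = 288.0   # observed average surface temperature, K (about 60 F)

    print(round(T_no_greenhouse))               # ~255 K, i.e. around 0 F without greenhouse gases
    print(round(T_measured - T_no_greenhouse))  # ~33 K, the roughly 60 F blanket effect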

Any change in the earth’s reflectivity or absorptivity affects the average surface temperature. If more radiation is absorbed, then the earth will warm to the point that it radiates enough additional energy to restore the balance. Although simple in concept, the details of how a given change in earth’s properties will ultimately affect temperature are complex, because of feedback effects. Temperature changes affect several properties that control absorption and reflection, some of which work to dampen the temperature rise, and others which further accelerate it.

CO2 Balance
One of the greenhouse gases that have made earth’s temperature hospitable is CO2. Over the past 400,000 years, its atmospheric concentration has varied between 0.02% and 0.03% (200 ppm and 300 ppm). The ice ages occurred during the lower ends of this range. Primary sources of CO2 production are volcanic eruptions, combustion processes (fires), and the respiration of aerobic organisms, e.g. the decomposition of organic matter. (Anaerobic decomposition also occurs underground, and in other areas where oxygen is limited, producing methane gas. Methane is an even stronger greenhouse gas than CO2- more about this later.) Processes that remove CO2 include plant photosynthesis and the absorption of the gas into the oceans, where it is used by phytoplankton to form sugars and by marine life to form carbonates, which eventually settle out as limestone rocks. Plant matter that is not decomposed, burned, or eaten becomes stored underground, as tundra, coal, or petroleum. Over time periods of a few million years, the processes which produce and absorb CO2 stay in rough balance, acting as a global thermostat which has kept earth's temperature sufficiently stable to maintain an unbroken chain of DNA-based life for over three billion years.

Atmospheric CO2 levels began rising above their recent historical range about 300 years ago, most likely due to increased burning of the stored carbon (coal and oil) to provide energy. The current production rate of CO2 from these energy sources is about 30,000 million tonnes per year¹. This is over 130 times greater than the historical average rate of CO2 production by volcanoes. Currently, the atmospheric level of CO2 is about 380 ppm and rising. It is projected to exceed 500 ppm by about 2050- significantly higher than at any time throughout which human civilization has existed. Although 500 ppm is not a large fraction of the gases in the atmosphere, CO2 is a very effective greenhouse gas, because its absorption peak is almost exactly centered on the peak of earth’s radiation back into space, in the 15 micron wavelength region. In contrast, neither nitrogen nor oxygen absorbs at these wavelengths.

Another greenhouse gas that is increasing is methane. Methane also absorbs re-radiation from earth, about 20 times more efficiently than equivalent amounts of CO2. Over the same time period as the CO2 increases, we have also experienced a doubling of methane levels in the atmosphere. The additional methane is almost entirely due to animal agriculture². Along with termites, cows are prolific producers of methane gas, as they decompose cellulose in their anaerobic stomachs. Animal agriculture releases about 100 million tonnes of methane per year, equivalent to about another 2,000 million tonnes of CO2.

The exact temperature rise that 500 ppm CO2 (and a doubling of methane) would produce is difficult to estimate, because of the feedback effects, but it is virtually guaranteed to raise sea levels enough to swamp coastal cities throughout the world. Present estimates project a sea level rise of between one foot and five feet by 2100. A three foot rise would require the relocation of about half a billion people who live near coastlines. My own state of NC would lose about 2000 square miles of land, putting 30,000 homes at risk. Our problems will be dwarfed by the challenges of moving New York City, Hong Kong, and the like.

Desertification of agricultural land in the mid-latitudes will likely accelerate, further reducing food supplies. Major rivers irrigating Southeast Asia will dwindle as their glacial sources melt away- already these rivers are choking on glacial silt due to recent high rates of melting¹⁶,¹⁷. About 20% of the world’s population currently depends on these mountain waters for their agriculture. Somewhat counter-intuitively, northern Europe might become too cold to grow crops, if the large quantities of fresh water from melting Arctic ice disrupt the northerly flowing Gulf Stream, as has happened during previous climate disruptions. It is unlikely that new agricultural regions can be developed quickly enough to avoid worldwide mass starvation.

The rate of such climate changes- and thus how quickly humanity must adapt- is hard to predict. Answering that question is the current goal of climate modeling, but it must be noted that the measured rates of change in such phenomena as sea ice melting, surface temperature increase, northerly movement of tropical species, and length of growing season have been exceeding the predictions of the models. The changes appear to be accelerating.

Feedbacks
Although a number of different physical changes could trigger an initial increase in temperature, the resulting chain of feedbacks would be similar. Excess CO2 generation appears to be the main driver of current concern, but the same kinds of changes would also be triggered by increased methane gas, a brightening of the sun, or a significant loss of the vegetation that removes CO2 from the air. Some of the positive and negative feedbacks are as follows:

·         The aerobic bacteria that respire CO2 are more active at higher temperatures, so they increase their average output as the average temperature increases. This effect is seen daily, with higher CO2 output during the day than during the night. The higher average CO2 output leads to higher average temperature, which leads to more CO2 generation, and so on. This is a classic positive feedback.

·         At higher temperatures, earth’s atmosphere retains additional moisture. Because water vapor is also a powerful greenhouse gas- absorbing strongly in the infrared- this is a positive feedback, further aggravating the problem. It has been theorized that the additional moisture would increase cloud cover, causing more radiation to be reflected back into space, but measurements have failed to support this speculation. Higher levels of atmospheric moisture would also be expected to increase the frequency and strength of violent storms, because water vapor is such a strong player in generating the convection currents that power such storms. This means more and stronger snowstorms, as well as more hurricanes, tornadoes, etc.

·         Higher temperatures cause a northward migration of the permafrost boundary. This uncovers expanses of frozen, fresh vegetation (tundra). Upon thawing, aerobic and anaerobic decay set in, releasing more CO2 and methane. This is a strong positive feedback. The northward migration of the permafrost line is already under way¹⁸; some Alaskan and Siberian communities are sinking into newly melted ground.

·         Large quantities of methane are locked underwater in the form of solid methane hydrates, at depths below about 1000 ft, where the temperature is low enough to keep them solidified. With warming oceans, these hydrates can rise to the surface and vaporize (much like dry ice), causing sudden, large releases of methane gas³. This mechanism is thought to be one of the contributors to the Permian extinction about 250 million years ago, when sudden global warming caused the death of most of the species that existed on earth at that time.

·         Higher CO2 levels are thought to contribute to increased growth of vegetation, which should remove more CO2 from the air- a negative feedback. Experiments have shown that enhanced growth is not guaranteed, however, because other factors such as temperature and nitrogen levels play an even larger role, and the future overall balance is not yet known⁴,⁵. Currently, the net amount of worldwide vegetation is decreasing, due to forest destruction at a rate of about 55 square miles per day. Additionally, more frequent droughts in the Amazon forest have been turning it into a net source of CO2, rather than a net absorber, so this is turning out to be another positive feedback- unless new, replacement forests can be generated in cooler regions.

·         The burning processes that release CO2 also generate aerosols- small liquid and solid particles that float in the atmosphere. One effect of such particles is light scattering. This increases the amount of sunlight that is reflected, and so reduces the absorbed radiation. This is a negative feedback, and one which can be very powerful⁶. Large volcanic eruptions load the atmosphere with enough particles to cause significant cooling… for a while. Most aerosols have a relatively short lifetime in the atmosphere, compared with the thousands of years that it takes for excess CO2 to be flushed from the system by natural processes. Atmospheric aerosols have other effects, in addition to scattering, because they can act as sites for chemical reactions; an example is the ozone destruction that was caused by atmospheric chlorofluorocarbons. Aerosols are also known by terms such as haze, smog, and pollution. This is a highly complex topic, and is currently a key area of climate research.

·         An effect of deforestation and desertification is to turn the exposed land brighter- to increase its reflectivity. This is a negative feedback, if neither a desirable nor efficient one. But it is not negligible, and must be considered in future climate modeling.


Other Side-Effects
Elevated CO2 and temperature levels have other consequences that do not alter the climate, but are of significance for those who inhabit it:
·         Elevated atmospheric CO2 causes higher levels of absorbed CO2 in the oceans. This generates more carbonic acid. Currently, the oceans are about 30% more acidic than they were in pre-industrial times¹⁹. The higher acid levels are detrimental to the growth of many of the calcifying (shell-building) organisms and to corals, which provide havens for much of the oceanic food chain. Many coral reefs have already succumbed; about 95% of the coral reefs off Florida are already dead, due to a combination of temperature and pollution. There is valid concern that the marine food supply could be the first major victim of high CO2.
·         Tropical species (plants, animals, insects, and diseases) are currently expanding polewards, as their local environments become hotter⁷,⁸. This brings new inhabitants into the older ecosystems, where they compete for the same resources. This can have drastic implications, with losses of many of the thousands of interdependent species that comprise local environments⁹. One example is the current devastation of white pine trees in the northern Rockies by pine beetles that now survive the warmer winters at high altitudes²⁰. Currently, world-wide extinction of species is proceeding at a rate about a thousand times faster than during pre-human times- as fast as during some of the past major planetary extinction events¹⁰. Global warming is part of the cause, and extinction rates will increase as warming accelerates in the 21st century.

What’s One to Do?
Most climate scientists agree that the maximum CO2 level that would not cause significant (i.e. dangerous) warming is 350 ppm. We have already exceeded that, so some of the above changes will certainly occur. There is a large difference in the scale of destruction between 400 ppm and 500 ppm, however, so there are still opportunities to prevent the most appalling scenarios. It will require the concerted actions of governments to force the large changes that are needed; neither individuals nor free-market enterprises could fund them at the scale and speed required.

·         The number one greenhouse gas generator is the burning of coal, which produces more CO2 than equivalent amounts of oil or natural gas (the “cleanest” fossil fuel). Unfortunately, coal is the cheapest way to generate energy. Most U.S. electrical energy comes from coal, which also produces large amounts of sulfur dioxides and nitrogen oxides. Ironically, the aerosols they form mask (temporarily) some of the consequent warming effects, although the resulting acid rain also kills vegetation. (So the idea of using rechargeable electric cars is a bad one, if it means burning coal instead of gasoline to provide their energy.) China also depends on coal to expand its economy, and is currently adding a new plant every month. However, China is also becoming a world leader in the production of clean, high efficiency coal plants¹¹ and other green technologies. It seems clear that China's rapid expansion of coal-burning plants is a stopgap measure, and that they fully intend to become the future world leader in green energy. Economic incentives will be the only reliable way to force the rest of the world off its coal dependency. This might entail taxing emissions, to provide the incentive to develop and expand cleaner sources- e.g. nuclear, wind, biofuels (but not corn-based ethanol), and solar- and better capture of emissions. Unfortunately, such efforts are typically labeled “job-killing energy taxes” by our politicians. An alternative is to subsidize greener fuels until development can make them competitive; the typical political label for this is “wasteful government spending”. Our U.S. politicians offer no alternatives, and ignore the immediacy of the problem.

·         Climate “engineering” is cautiously being examined in the technical community as a way to live with high atmospheric CO2 levels. Typically, these schemes involve large-scale changes to the oceans or atmosphere to alleviate some of the effects¹². Examples include “iron fertilization” of the oceans, to encourage additional plankton growth that removes CO2 from the air. Also being seriously considered is the injection of sulfur aerosols at high altitudes, to scatter more of the incoming radiation. Banks of orbiting mirrors have even been proposed, to redirect radiation. Clearly, some of these are ideas born of desperation, and it is not obvious who would have the authority to implement such risky global experiments. The side-effects of such interventions are largely unknown.

Reflections        
The Intergovernmental Panel on Climate Change (IPCC) has provided periodic updates on the scientific consensus on the topic since 1988¹³. Prior to this, there was no organized effort to assemble the data into a form amenable to government action. At every IPCC update, the case for global warming has been strengthened, and the current consensus is that there is a 90% probability that we will experience warming of between 3°F and 8°F by mid-century¹⁴. This represents a higher average temperature than at any time during which Homo sapiens has lived. Over 50 international science organizations have publicly supported the IPCC conclusions and recommendations, including the American Geophysical Union, which represents 20,000 climate scientists. Among climate scientists, over 95% agree that global warming is occurring, and is caused largely by human activities. No scientific organization on record has opposed the IPCC conclusions. There is a small minority of scientists who publicly disagree, and many of these are handsomely funded by energy companies. There is always money to be made in climate change denial, especially if backed by plausible research.

In setting up the vast machinery of Homeland Security and two ground wars, VP Dick Cheney famously declared that it was worth the cost and effort even if there was only a 1% chance of another terrorist attack on the scale of 9/11. In fact, if there were an attack on the scale of 9/11 every year, the lifetime chance of an average American being affected would be about 1 in 1,300¹⁵. The chances of ending your life in a plane crash are twice as high, at 1:625. Cars are yet more dangerous, with a 1:83 chance of death. With a 90% probability of major climate impacts, the vast majority of which are negative, and a credible chance of catastrophic consequences for millions of people, it is dispiriting to see the lack of government focus. One is tempted to believe that many politicians are in it only for the next election.

At present, four of the five hottest years on record have occurred since 2003. The recent year 2010 was tied with 2005 as the hottest ever recorded. Despite yearly fluctuations caused by events such as El Niños and La Niñas, the average baseline temperature is steadily creeping upwards at a rate of about 0.35°C per decade²¹. This rate is expected to increase as feedback effects from atmospheric CO2, methane, and water vapor levels continue to strengthen.

Epilogue
After the contact between the Titanic and the iceberg, there were no evacuation plans that could stop the ensuing disaster.  It could have been prevented only by being foreseen and avoided. It took a little over two hours for the ship- still carrying most of its passengers- to sink below the surface, and about five more minutes to fall two miles to the ocean floor. Those who survived in the few available lifeboats reported that the screams of the passengers left stranded on the surface took about another hour to subside.

GPR: Jan, 2011


Deconstructing the Apocalypse

A large number of Bible experts agree that we are now in the end-times, citing the 1948 reestablishment of a homeland for the Jews as a necessary precursor (as was predicted in Revelation). The final battle at Armageddon- presently thought to refer to a location in the northern mountains of Israel¹- has other preconditions that have not yet been established. In reverse order, these include the following:

1.      Seven years of tribulations for Israel, with the latter half of this period witnessing the amassing of a huge coalition of anti-Israeli forces on its northern border. (Iran, Russia, and several other former Soviet states are thought to be key participants¹.) Earthquakes, famines, and natural disasters brought forth by God will decimate and terrorize these forces before they can accomplish an invasion to destroy Israel.

2.     The amassing of forces will be preceded by the betrayal of an agreement made by Israel with Western governing bodies to protect it. The Western coalition will include 10 countries, made up of nations formerly within the boundaries of the Roman Empire. They will be headed by the Anti-Christ, a charismatic politician who will rise to power by popular acclaim. He will be the chief betrayer, acting in concert with Satan².

3.     The need for such a “one-world” government will have been driven- in part- by the disarray caused by the Rapture, wherein a large fraction of Christian believers suddenly disappear from Earth³. Especially hard-hit will be Western nations with large Christian populations, such as the U.S. Those raptured will be spared the subsequent turmoil, then returned after it is all over.

Various Bible experts differ in some details, but most are in broad agreement on the sequence of events described above. The date of the Rapture, which triggers these events, is yet unknown but is thought to be imminent.

The ultimate, divine intercession for Israel is required by God’s former promise to treat the Jews as His chosen people. Why it takes Him so long to act for them is not explained, nor is His choice to protect non-Jews (i.e. Christians) from the tribulations by rapturing them instead of His chosen people. Some Biblical experts claim that only those who accept Christ as God will be saved, so the Jews’ well-known lack of respect for the divinity of Jesus does seem to present a problem. Perhaps that is why they must suffer the tribulations before the final battle that saves them.

Also problematic is the apparent certainty that the above events will occur in the order foretold. If these forthcoming events are already known to Bible readers, then they are certainly also known to God. One wonders why God would let things get so bad before stepping in, if He a) already knows how this is going to end, b) has the power to intercede whenever He wants, and c) has always loved and wanted to protect His chosen people.

When the Rapture occurs, it will certainly be big news around the world. Such a large-scale disappearance of people is unprecedented, and it will not go unnoticed that they are all Christians. One would think that some non-believers would be aware of the Biblical prophecies, even if they didn’t believe them, and that they would start putting two and two together. Why they would then continue blindly down the prescribed destructive path for another seven years seems illogical- unless they have no choice, i.e. no free will in the matter, and then one wonders why God would make them victims through no fault of their own.

Those Christian authors who subscribe most strongly to the certainty of the coming Apocalypse do not address questions about the logic or morality of an all-knowing God who would let it come to this, but instead seem happy that they will be able to observe the horror from the sidelines. If I should personally witness- but not participate in- a worldwide Rapture, it will probably be too late for me to become a better Christian, although it would certainly convince me to become a stronger believer. Perhaps the best choice for me (and everyone else) would then be to use the remaining seven years to study Jewish scriptures and join that faith. Although the Jewish tribulations might be painful, there is at least the promise of eternal salvation for the Chosen People at the end of them. It might be the only game left in town.

GPR:  Jan-2011

1)   What in the World is Going On?; Dr. David Jeremiah; Thomas Nelson publisher (2008)