by Dr. Ed Berry
Introduction to the AGW problem
Anthropogenic Global Warming (AGW) is the hypothesis that human emissions of CO2 due to burning fossil fuels are causing dangerous global warming.
AGW proponents claim AGW is so dangerous that we must drastically reduce our fossil fuel CO2 emissions in order to stop the disastrous effects of AGW.
Proponents claim the only way to sufficiently reduce human CO2 emissions, and thereby escape the disastrous effects of AGW, is to pass laws that force us to reduce our CO2 emissions. AGW proponents claim we must pass state laws, such as California’s AB32 and Gov. Schwarzenegger’s Western Climate Initiative, national laws, such as Cap and Trade hidden under various clever names, and even international laws, such as the Copenhagen Treaty, in order to force all people and all nations to limit their fossil fuel CO2 emissions.
AGW proponents demand we pass local, city, and county laws to reduce CO2 emissions and they work to enforce such laws under the United Nations International Council for Local Environmental Initiatives (ICLEI), Sustainability, and Smart Growth. The AGW hypothesis has become a de facto assumption in our national laws, and is being enforced through the EPA, USFS, and USFWS to name only a few.
The AGW hypothesis has been used to distort our economic playing field by inserting a requirement to count carbon emissions, sequester CO2, provide tax-payer-funded incentives for wind and solar facilities and their necessary electrical transmission lines, and even tax incentives to buy certain automobiles.
The AGW hypothesis has been used to stop the construction of many new clean coal-electric power plants in America since 2000. If the AGW proponents have their way, the AGW hypothesis will control our lives even more in the future.
Constraining our fossil fuel CO2 emissions does not come for free. The trade-off is our standard of living, our economy, and our freedom. Clear decision making requires choosing between frying the planet, if you believe in AGW, and surrendering many of our freedoms while imposing self-inflicted economic handicaps destined to wreak havoc on the competitiveness of American businesses, sending even more jobs to China and elsewhere. The stakes could hardly be higher.
Therefore, people in every affected country, and perhaps especially in America, should sit down and decide whether or not AGW is a valid scientific hypothesis.
The Scientific Method
AGW Proponents claim AGW is a “scientific” hypothesis (or theory) and that the AGW hypothesis has been proven to be valid. Before we can determine whether AGW is a valid hypothesis, we must determine whether it is a scientific hypothesis. Therefore, we must first review the scientific method.
There is only one basic method common to science. It is called The Scientific Method.
Dr. Albert Einstein, a master of the method, emphasized science must start with facts and end with facts. All theoretical scientific structures fall between facts. We will also call these facts “data.”
Dr. Richard Feynman, Nobel Prize winner in Physics, described the scientific method this way (“The Character of Natural Law”, The MIT Press, 1965, p. 156.):
“In general, we look for a new law by the following process. First, we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation to see if it works.
“If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what your name is—if it disagrees with experiment it is wrong.”
Assume you are a scientist.
You are first an observer. Then you look for some generality, or hypothesis, to explain the facts or data you observe and what you expect to see in the future. Using your hypothesis, you make a prediction about facts unknown. Finally, you check your prediction against new data.
As a scientist, you hold your hypothesis tentatively. If a prediction is not confirmed by the new data, then you abandon your hypothesis. This is an absolute requirement of the scientific method.
Data, or facts, are always the basis of the scientific method.
- We begin with Data.
- On the basis of Data, we guess a hypothesis. This is our Idea that we hold tentatively subject to test.
- To test our hypothesis, we use it to make a Prediction.
- Finally, we test our prediction against new Data (#4). We do not test our prediction against our original Data (#1) because that would be circular and not a valid test.
If new Data shows our prediction to be valid, then we hold our hypothesis tentatively and make new predictions and tests.
In the diagram, Data (#1) and Data (#4) are the realm of experiment. Idea and Prediction are the realm of mathematics.
The process of going from Data to Idea is called “Induction.”
The discipline of Statistics assists in performing induction, that is creating a hypothesis from facts. However, this statistical induction does not prove the hypothesis is valid. The hypothesis must still be tested using the scientific method.
The process of going from Idea to Prediction is called “Deduction.”
Deduction takes us from general to specific. This is the realm of Mathematics and Probability.
Climate models are a means of going from Idea to Prediction. They calculate predictions of hypotheses.
The process of going from Prediction to Data is called “Verification.”
This is where every hypothesis must meet the road. Statistics assists in performing verification.
The scientific method requires looping through these four steps until a hypothesis is found with a successful record of making valid predictions. Science then calls such a hypothesis a “Theory” such as Einstein’s Theory of Relativity, or even a “Law” such as Newton’s Law of Gravity.
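The four-step loop just described can be sketched as a toy calculation. Everything below is invented for illustration: the "hypothesis" is simply a straight-line fit induced from old data, and the test uses new data only, never the data that produced the fit.

```python
# Toy sketch of the Data -> Idea -> Prediction -> Data loop.
# All numbers are invented for illustration.

def fit_line(xs, ys):
    """Least-squares line: returns (intercept, slope). The Induction step."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

old_x, old_y = [0, 1, 2, 3], [1.0, 3.1, 4.9, 7.2]   # Data (#1), invented
a, b = fit_line(old_x, old_y)                        # Induction -> Idea
prediction = a + b * 5                               # Deduction -> Prediction
new_observation = 11.0                               # Data (#4), invented
print(abs(prediction - new_observation) < 0.5)       # Verification
```

Note that the verification step compares the prediction only against the new observation; comparing it against `old_y` would be the circular test warned against above.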
According to the scientific method, a hypothesis cannot validate itself. It can only be validated or invalidated by new data. When you read a climate model “proves” global warming, you will realize such a statement is non-scientific. A model or a hypothesis cannot “prove” anything. But data can invalidate a hypothesis or model.
Einstein described the “Key” to science well when he said:
“The case is never closed.”
“Many experiments may prove me right but it takes only one to prove me wrong.”
A valid scientific hypothesis must be falsifiable.
Falsifiable means there must be some experiment or possible discovery that could prove the hypothesis untrue.
For example, Einstein’s theory of Relativity made predictions testable by experiments. These experiments could have produced results that contradicted Einstein. While Einstein’s theory has, so far, stood the test of time, the point is that his theory is constructed to be falsifiable.
By contrast, a hypothesis that your home is populated by little green men who can read your mind and hide or turn invisible whenever anyone looks for them, is not falsifiable: these “little green men” are “designed” so no one can ever see them.
On the other hand, a hypothesis that “there are no little green men in your home” is scientific: you can disprove it by catching one. Similar arguments apply to abominable snowmen, UFOs and the Loch Ness Monster.
This method of falsifying a negative to prove a theory was discovered in mathematics in the 1820s.
The following chart shows necessary qualities of a Scientific Theory.
Dr. Ed: Great start; let’s see if Dr. Eric can start filling in the blanks with facts, not fairy tales. As most of the AGW arguments are based on circumstantial evidence and not experimental data, this should be fun. Another point is that much of the supposed evidence of AGW, such as data from NOAA, NASA, and East Anglia, has been torn apart by recent events such as Climategate and Satellite-gate, and by the IAC report on the IPCC. And given the evidence of very cold winters to come, we may need some real global warming.
I would hope that Dr. Eric would start with the 1824 work of Fourier, the 1864 work of Tyndall, and the 1896 work of Arrhenius. As none of these good physicists provided any experimental results that “proved” the “greenhouse gas effect,” it is still a hypothesis.
I will follow the flow of this debate and add in as it seems appropriate.
Environmental engineer with 47+ years of learning and practicing the Scientific Method.
You are absolutely correct to return to the scientific method (the epistemology, in philosophers’ terms) for a proper start of the discussion. And pleased I am indeed to find your emphasis on the need for an inductive method: that is, to start from factual observations, proceed with hypotheses, and after a while return to data for the proof of the pudding.
But one addition perhaps to your brief statement: the hypothesis itself cannot be arbitrary, it should itself rest somehow on data, i.e. it must be made plausible in the first place. For example, a physical mechanism in combination with a temporal possibly causal correlation (e.g. data obtained from ice sheets). One thing that will not do is to base it solely on a metaphor (e.g. greenhouse).
I am very curious to learn what exactly this first necessary step will yield for AGW to make it plausible – or not. I propose you aim to insist on getting the basics right before proceeding any further.
My compliments by the way for your initiative, to both of you, this is really an unprecedented (sadly so) but necessary event!
Here are some of my initial responses to your “2. The Scientific Method”
Under “Introduction to the AGW problem”:
Under “The Scientific Method”:
Ed, most of your explanation of the scientific method is very well known and can be retrieved by doing a Google search of the internet (perhaps one would have to punch in Einstein to get that nice photo of him). A problem I have with your coverage of the scientific method, however, is that it is so incomplete and simplistic. Let me explain.
If one is dealing with a very specific and isolated question, such as Einstein’s theory that light beams can be bent by the gravitational field of an object, that specific theory can be directly tested by definitive experiments. This has, in fact, been done many times since by observations of starlight being bent around the sun during solar eclipses.
If one is dealing with very complex and multivariable systems, however, such as a suspected detrimental effect of some pollutant on some aspect of the environment, the testing of theories associated with that effect becomes much more difficult. The best one can then do is measure as many of the related variables as one can and try to assess the relative probabilities that the theory is either correct, partially correct, or not at all correct. In that process, one might reach a point at which one decides that the probability of some specific unwanted outcome is sufficient to do something about it long before the feared outcome actually occurs.
Therefore, Ed, if you are suggesting that we should not be concerned about a given environmental possibility simply because one can find one observation concerning one variable that could possibly be used to argue against the larger issue, you do not understand how scientific assessments of complex issues are done. For example, just because the temperatures measured at the airport of Edmonton, Alberta, were unusually low last winter does not refute the concept of AGW.
Under “As I understand them, below are your claims. Is this correct? If not, please revise as necessary:”
1. If Base CO2 is defined to be 285 ppm, that definition applies to the level reached during the current interglacial period, called the Holocene, which began only about 12,000 years ago and persisted until the Industrial Revolution. In one or two of the earlier interglacial periods of the last 750,000 years, CO2 reached slightly higher levels, but never greater than 300 ppm.
2. The two main contributions to the 35% increase in CO2 since 1850 are thought to be the combustion of fossil fuels and changes in land use (deforestation), in that order of importance.
3. Today, atmospheric CO2 is 392 ppm and increasing at a rate of about 2 ppm per year. At that rate, CO2 in 2020 would become about 414 ppm. Relative to the Base level of 285 prior to the Industrial Revolution, that would constitute an increase of 45%. If the rate of CO2 increase is further increased, due to increased industrial activities in the underdeveloped countries of the world, then the increase by 2020 will be closer to 50%.
4. Correct as stated.
5. Correct as stated.
6. We are already in trouble with 392 ppm CO2 – due to the delayed heating effects of the excess CO2 and because the excess CO2 levels will last much longer than required for that delayed effect to kick in.
7. The next decade will be hotter than the last one unless one of the other known cooling effects (such as increased particulate matter, whether intentional or natural, as from several huge volcanoes like Pinatubo in 1991) happens to overwhelm the warming effect of the GHGs.
8. Yes, with the same possible exceptions as listed in 7.
9. In that case, the Earth will be in distinctly worse shape than today because of the impending changes that increased CO2 level will cause. By then, physical changes are likely to have already caused great difficulties. For example, it will be increasingly difficult for the world to adjust to our increasing sea levels since we have become accustomed to the present levels that have existed since civilizations first developed along the countless coastlines of the world.
10. Correct as stated.
11. It is already too late to “avoid” the threat’s impact. So what I am saying is: the sooner we stop CO2 emissions, the better.
Concerning the rest of your verbiage under “Finally …” and “Furthermore …”,
I will think about your suggestions here a bit more before responding.
In the meantime, Ed, it has occurred to me that you have not stated anything yet that you believe with respect to the science of atmospheric CO2. For example, many of us who have worked in the field of atmospheric chemistry for many years believe that CO2 is, at the very least, a very important greenhouse gas. Would you be so bold as to share with us your opinion concerning the importance of CO2 in affecting the Earth’s temperature? And in doing so, you can skip all the verbiage that would be required to be in rigid compliance with your recommendations listed in “Finally…”.
Also, you don’t have to provide extensive evidence behind any of your initial statements (as you suggested I should in “Furthermore…”). As you know, such an exercise would require many pages of references and associated discussions of those references. Just some plain statements describing your view of the effect of CO2 on temperature would do for starters, in my opinion. The general public should be able to understand that type of exchange without investing hours of effort on their part as well. Later, when we identify central points of importance over which we clearly disagree, it might be appropriate to go to the literature for further clarification of those points.
To be more direct, my understanding of this debate was that you and I are experienced atmospheric scientists whose direct exchanges, based on our collective and extensive knowledge in this area, were what the public might want to observe, rather than hundreds of pages of stuff we both can extract from the literature as well as internet bloggery.
As part of our base standard, I'd like to suggest that the term "greenhouse gases" be eliminated and the term IR-absorbing gases (IRag) be substituted. Later on I'll be proving that the "greenhouse gas" phenomenon is a total error and has no place in a truly scientific discussion.
Just as a start: water, which is found in all three phases of matter in the atmosphere, cannot and should not be referred to only as a "greenhouse gas." At ambient temperature, water as a vapor has the properties of a gas; thus, when it absorbs IR or other wavelengths of light radiation, it does not heat up, following the Bohr model. When water as a liquid is in the air, the liquid does heat when it absorbs visible light and IR radiation. Obviously, clouds are in a constant state of change as the temperatures within them change, forming water droplets and ice crystals or heating to produce significant amounts of vapor. Again, any solid water in the clouds can absorb visible light, IR, and UV, thus "heating." The energy exchanges during these processes are significant and occur for the major part of the day. More on this later.
Now, to compare CO2, CH4, or any other of the IRag, which are almost below detectable limits, to the weather effects of water is asinine. Water absorbs UV, visible light, and IR; its IR absorption spans roughly 3 µm to 30 µm, thus overlapping the three narrow bands for CO2 and the IR absorption of CH4.
Let's discuss CH4 in just a few words. CH4 is lighter than air; therefore it rises in the atmosphere, unlike CO2, which is heavier than air. As CH4 rises from oceans, lakes, marshes, swamps, rice paddies, cows, sheep, and places where anaerobic digestion occurs, it interacts with O3 (ozone) and is oxidized to water vapor and CO2. The oxidation produces heat; however, because this occurs along the path as the CH4 rises, there is no measurable zone where the temperature has been detected to increase. It is a guess that most of the oxidation occurs in the upper atmosphere at the border of the ozone layer, and the heat dissipates into space almost immediately. It is known that the concentration of CH4 in the lower atmosphere is in the single-digit parts-per-billion range.
We are continually told by the AGW crowd that CH4 has anywhere from 23 to 70 times the “ghg” effect of CO2. They cannot prove the “ghg” effect; where is the proof of the higher “ghg” effect of CH4 or any other IRag? Just because a gas absorbs more IR at different wavelengths does not prove it causes a greater effect. Where is the data?
I have lived in the Great Lakes region for 71 years. I was told by my father that in 1924 it snowed on July 4th. When my son was born in Cleveland on July 5, 1972, it was cold enough to snow. Where is the climate change? Obviously the AGW group has not bothered to define "climate" or "climate change." Therefore, here is a definition. Can you prove me wrong?
Definitions of the Climate Discussion
What is Climate?
Definition: Many thousand weather days, end to end, for a specific location.
How many climates are there in the world?
Every part of the country and the world has a unique climate: the south of France, the North Slope of Alaska, the heart of Africa, the northeast Great Lakes region of the US, the north of Italy, the south of Italy; thousands of different climates.
What is weather?
The atmospheric conditions where you are.
Can mankind control the weather?
We have tried for thousands of years, from the Indian rainmaker to the cloud seeders of the 1950s and '60s. Man cannot control the weather, so how the hell can man be controlling the climate? This whole B.S. of MANN-made global warming is a fairy tale. The MANNipulation of temperature data is a crime against humanity, and these criminals should be put in jail.
As there is no proof that the "ghg" effect exists, where is the panic? It exists only in the minds of the Henny Pennys crying that the sky is falling!
It is time that Dr. Eric starts looking at some facts. Look at http://www.climatedepot.com for ten minutes, and if he doesn't start to question his position, I'd suggest it is time to see his doctor. Repeating the words of Albert Einstein: "The only thing worse than ignorance is arrogance!"
I am glad that a) the one misplaced joke is gone and that b) Berthold will return if he is feeling better (B: this unrestrained expression is not the intention of this discussion, which ought to remain civilised, though your point, the definition of climate and 'temperature' is relevant here).
Dr. Eric, your sneak preview of your next statement points at an important issue: the evidence on which to base a valid hypothesis concerning a complex multivariate system. Now we live in an age where 'complexity worship' seems to prevail, and one might be tempted to use the complexity argument to obtain a waiver of the normal need for proper cause-and-effect reasoning towards a valid (non-arbitrary) hypothesis. I'm confident you will not.
Some key passages that struck me in this respect are: 1) "… try to assess the relative probabilities" (one should be very cautious about using statistics as departure from the scientific method is all too easy), 2) you see "strange behaviour" of 'temperature' after 1850, based on ice core and fingerprint data.
Among all the subjects waiting to be discussed (all in due time!) I propose that this is the thing the discussion centers around in this stage.
For all the misplaced things Berthold may have said in his last contribution, I do like his earlier suggestion to use the term IRag rather than the insinuating term GHG, or would that be too much to ask?
Greeting from 4m below the bottom of the sea (The Netherlands 🙂 ).
I would like to suggest an approach that might help to bring a bit more order to the debate. Various claims touching many, many issues in the AGW controversy are proliferating greatly. My suggestion is that we all focus on a narrow set of tightly related issues/questions relating to the AGW theory, one set at a time, and further, that commentators try to limit their questions and remarks to the set at hand. In order to evaluate the AGW theory, we need to examine its elements and know what they are and what if any clear evidentiary support they currently have. We may reach something of an impasse on some issues in the set and have to move on to the next set of issues, but we can always come back.
For example, Dr. Grimsrud has asserted that CO2 is a greenhouse gas and perhaps causally the most important one. Cyril Wentzel [Comment of 10/16 – 1:55 pm] noted that there is broad agreement that the direct (or "no feedbacks") effect of doubling CO2 would be an atmospheric dT of 1°C. The IPCC 4AR/WG1 [IPCC Fourth Assessment Report, Contribution of Working Group 1 (2007)/"The Physical Science Basis"] claims that the radiative "forcing" of an instantaneous doubling of atmospheric CO2 would be 3.7 W/m^2. This is said to be the base or no-feedbacks Climate Sensitivity, CS(0). What does all this mean? In particular,
1) What is a "radiative forcing"? Explain the radiative/convective top-of-the-atmosphere (TOA) equilibrium concept.
2) What is the difference between a "forcing" and a "feedback"?
3) How is this forcing effect of 3.7 W/m^2 calculated for a doubling of CO2, and what evidence or physics supports the calculation? [Here we could discuss the IPCC formula: 5.35 x ln(CO2(1)/CO2(0)) = X W/m^2, where "ln" is the natural log function, CO2(0) is the base or "pre-industrial" (or whatever you want to start from) CO2 level in ppmv, CO2(1) is the new or changed level of CO2 concentration in the atmosphere, and 5.35 is an amplification factor (which factor has changed in value in the IPCC reports).]
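For readers who want to see the IPCC simplified expression at work, here is a minimal sketch. The 5.35 factor is the value quoted in the question above; the concentrations used are illustrative:

```python
import math

def co2_forcing(c_new_ppm, c_base_ppm, alpha=5.35):
    """Simplified IPCC expression: dF = alpha * ln(C1/C0), in W/m^2."""
    return alpha * math.log(c_new_ppm / c_base_ppm)

# A doubling from any base gives the same forcing, ~3.71 W/m^2:
print(round(co2_forcing(560, 280), 2))

# Forcing from the pre-industrial 285 ppm to the 392 ppm cited earlier:
print(round(co2_forcing(392, 285), 2))
```

The logarithm is why each successive increment of CO2 produces less additional forcing than the last, a point both sides of this debate may wish to keep in view.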
4) What calculation and data allow us to convert a forcing of X W/m^2 into a change in globally averaged surface temperature? (Note: The IPCC concepts of both the base ("no feedbacks") and the full ("with feedbacks") Climate Sensitivity are "referred to" changes in globally averaged surface temperature. Indeed, both CS(0) and CS are given in terms of dT(s) [= change of surface temperature].)
5) What is the "bare Planck Response" or "Planck function" with respect to an increase in temperature? Temperature of what? [This Planck Response is said to be the most basic of feedbacks. Indeed, it is a negative feedback.] And how is the Planck Response related to the CS concept?
6) What is the inverse of the Planck Response and how is that related conceptually to the base/no-feedbacks climate sensitivity, CS(0)?
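A back-of-envelope answer to questions 5) and 6) can be sketched under common textbook assumptions (Stefan-Boltzmann emission at an effective radiating temperature of about 255 K); the numbers below are illustrative, not the IPCC's exact values:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_EFF = 255.0     # assumed effective radiating temperature of Earth, K

# Planck response: derivative of emitted flux F = sigma*T^4 with respect
# to T, i.e. 4*sigma*T^3. A negative feedback: warmer -> more emission.
planck_response = 4 * SIGMA * T_EFF ** 3      # W/m^2 per K

# Its inverse is the base (no-feedbacks) climate sensitivity parameter:
cs0 = 1 / planck_response                     # K per W/m^2

# Applied to the 3.7 W/m^2 doubling forcing discussed above:
dT_no_feedbacks = 3.7 * cs0
print(round(planck_response, 2), round(dT_no_feedbacks, 2))
```

This comes out near 3.8 W/m^2 per K and roughly 1 K of warming for a doubling, consistent with the no-feedbacks figure of about 1°C cited earlier in the thread.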
7) How are the forcings for the other long-lived greenhouse gases (LLGHGs) determined? In generally the same way as for CO2?
The set 1) through 7) is meant to be an example of a "tightly related" set of issues/concepts/questions, etc. For a second set, we might go on to consider whether and how well CO2 and temperature have been correlated (from the present all the way back to the Cambrian if one likes), the forms and magnitudes of natural climate variation, and take a stab at the pro-AGW attribution argument, something like: Premise 1: Climate models (in pre-industrial and non-forced 20th-century control runs), supported by well-confirmed empirical studies of natural climate variability (e.g., hockey-stick studies), handle natural climate variability (very well, adequately, etc.). Premise 2: Climate models cannot reproduce the observed temperature variations since 1750, 1850, 1900, 1950 – pick your starting point – without introducing the correct estimations of anthropogenic GHGs into the model. Premise 3: These GHG emissions are responsible (in the model) for the bulk/most/etc. of the warming since ______. Premise 4: There are very likely no unknown climate variables not adequately represented in the models. Therefore, the bulk/most/etc. of the warming since ______ has been caused by anthropogenic GHG emissions.
This approach, or something close to it, will enable us to focus our efforts; and those who don't know much about the climate change issue, but who are still members of the "jury," will have a better chance to really learn something about it, rather than facing a welter of randomly introduced and often remotely related claims, or overwhelming our principals, Dr. Berry and Dr. Grimsrud, with them.
Finally, and as at least one commenter has already suggested, I think we should be a bit kinder in our rhetoric to Dr. Grimsrud (and to any other AGW proponents who show up).
To Berthold Klein:
1) Molecules have such low mass that gravitational separation of different gases in Earth's atmosphere is very small (but finite). While CO2 is not fully mixed, due to lag time when sources and sinks vary over time (especially Northern to Southern Hemisphere), the variation in mixing is small enough not to make an issue of.
2) Optically absorbing gases such as water vapor and CO2 do not create a greenhouse effect the same way greenhouses do. However, they do reduce the magnitude of net radiative transfer from the ground, and thus act as a partial radiation insulator. The remaining (and dominant) energy transfer from ground level to the upper atmosphere is convective (air, and evaporating water vapor which condenses at some altitude to release energy in the atmosphere). This results in the average location of radiation to space occurring at a significant altitude rather than directly from the ground. The altitude at which the effective radiation to space occurs sets the effective temperature at that altitude. The convective atmospheric mixing establishes a lapse rate which is independent of the radiation effect, and dependent on gravity and the specific heat of the atmosphere, somewhat modified from the dry-air value by the phase change of water vapor (wet lapse rate). The temperature at the effective radiation-out altitude, plus the lapse rate times altitude, determines the average ground-level atmospheric temperature. It is clear that the only effect of changing the CO2 concentration level would be a small change in the effective outgoing-radiation altitude. Thus CO2 is in effect an atmospheric greenhouse gas in that it can increase ground-level temperature. The effect for Venus is huge, but for Earth it is small (about 30°C), and for the Earth, CO2 is third in line after water vapor and clouds.
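The bookkeeping described in point 2) can be sketched numerically for Earth. The radiating altitude and lapse rate below are round textbook figures assumed for illustration, not inputs taken from the comment:

```python
# Surface temperature = temperature at the effective radiating altitude
# plus (lapse rate x altitude), per the mechanism described above.
T_eff = 255.0    # K, effective radiating temperature (round figure)
lapse = 6.5      # K/km, mean observed (moisture-adjusted) lapse rate
h_rad = 5.0      # km, rough mean altitude of emission to space

T_surface = T_eff + lapse * h_rad
print(T_surface)   # ~287.5 K, close to the observed ~288 K global mean
```

The closeness to 288 K should not be over-read; the altitude of emission is itself a derived quantity, and these are averages over a very inhomogeneous atmosphere.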
Your Venus comparison is fascinating. But I have recently challenged (in my own mind) the assertion that the hot ground level found on Venus is entirely due to captured solar electromagnetic radiation. Instead, I propose that vulcanism may be much more prevalent there than on planet Earth. Consider that our major solar system planets were likely accreted in a major reconstruction after the preceding star's supernova event, with the heavier atoms concentrated nearer the center of that event. Hence the much greater concentration of the heaviest atoms, which also carry the greatest amount of residual radioactive decay, which today fuels the vulcanism observed on most if not all major planets. Therefore, to attempt to build a present-day CO2 albedo model based primarily on preliminary Venus observations may be (is) folly.
Instead, I posit that some baseline experiments be launched immediately (likely they are already in progress in some obscure university and industrial basements) in which the absorptivity of CO2 gas for solar radiation is measured (using laboratory broadband "light" sources) at scaled path lengths and pressures. The rationale may be paraphrased as follows:
The concentration of CO2 is now about 350 ppm, or roughly 1/3000th of an atmosphere, which totals 15 psi in weight. Thus the weight of CO2 here is 15/3000, or 0.005 psi, or 0.72 pounds per square foot, or 7.75 pounds per square meter, or 3.5 kg/m^2. Air is 1.2 kg per cubic meter; the molecular weight of air is on average about 28, vs. 44 for CO2, so a cubic meter of CO2 at atmospheric pressure weighs about 1.9 kg. Increasing the pressure of the CO2 to 1.8 atmospheres, or 27 psia (12 psi gauge), in a modest 1 m long tube (any diameter; 10 to 20 cm seems like a convenient range) will provide a convenient specimen of gaseous CO2 whose transmittance, absorptivity, and heat-up (calorimetry) can be readily diagnosed in our time. Or a tube 1.8 m long at normal sea-level pressure will also do. One can also create a partial vacuum in that tube to simulate the higher-altitude environment (to evaluate low-pressure un-broadening of CO2 absorption lines), and finally one can chill that evacuated tube to evaluate stratospheric cool-temperature narrowing of same. Thereby we shall obtain the albedo quantities needed to finish our Earth-albedo CO2 variation model. This should have been done ages ago at Penn State, my alma mater, if they had their thinking caps on. Both infrared technology and meteorology have been generations-long interests there.
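The back-of-envelope column arithmetic above can be checked in a few lines. One refinement worth flagging: converting 350 ppm by volume into a mass fraction requires the molar-mass ratio 44/29, which raises the column mass somewhat above the commenter's figure. Both versions are shown; all constants are standard values assumed for this sketch:

```python
P0 = 101325.0    # Pa, sea-level pressure
G = 9.81         # m/s^2, gravitational acceleration
PPMV = 350e-6    # CO2 volume fraction used in the text
M_CO2, M_AIR = 44.0, 29.0   # molar masses, g/mol

air_column = P0 / G                   # ~1.03e4 kg of air above each m^2
naive = air_column * PPMV             # treating ppmv as a mass fraction
corrected = naive * (M_CO2 / M_AIR)   # proper mass fraction

rho_co2 = 1.9    # kg/m^3, pure CO2 near 1 atm (the text's value)
print(round(naive, 1), round(corrected, 1), round(corrected / rho_co2, 1))
```

The naive figure reproduces the text's ~3.5 kg/m^2 and ~1.8 m equivalent path; the molar-mass correction pushes these to roughly 5.5 kg/m^2 and a bit under 3 m, so the proposed tube would need to be somewhat longer (or at higher pressure) to match the real atmospheric CO2 path.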
There is no need to postulate vulcanism as a heat source for Venus. There are two requirements for solar input to an atmosphere to result in the ground temperature. One is the presence of enough atmospheric greenhouse gases or clouds to greatly reduce the radiation flux through the atmosphere until it reaches a great enough altitude to radiate to space. The second is the physical properties of the atmosphere (height and specific heat of its gas composition). The atmosphere of Venus consists mostly of atmospheric greenhouse gases (CO2 at 93 times the mass of Earth's atmosphere), so it has very low radiation heat transfer; thus the radiation to space occurs at a very high altitude (about 50 or more km). The exact amount of the atmospheric greenhouse gas is not very important, as long as there is enough to move the location of the outgoing radiation to a high altitude. The height of the atmosphere is also critical. The lapse rate (temperature drop rate due to adiabatic expansion with increasing altitude) of the atmosphere of Venus is (as accurately as has been measured) the value directly calculated from the gravity level and specific heat. Radiation does not play a significant part in the lapse rate in the atmosphere except at the higher altitudes. Take the calculated temperature at the effective location of outgoing radiation, add the lapse rate times altitude, and you get the measured surface temperature. If straightforward physics completely explains the measured value, why do you need to invent a new and unlikely process? By the way, even if Earth had 10 times as much CO2 as it does, the ground temperature would not change much, since the altitude of outgoing radiation would be only slightly higher (due to the far lower mass of Earth's atmosphere compared to that of Venus). Also, the specific heat would not change significantly. Thus the temperature would change very little.
In fact, due to the presence of oceans and water vapor, there probably would be negative feedback, to also limit any change.
Leonard, your description seems similar to the one by Chilingar et al: http://www.informaworld.com/smpp/content~db=all~c…
Or are you saying something different?
I am saying something different. His paper talks about CO2 causing cooling; I say it causes heating. The outgoing radiation has to match the absorbed solar radiation (except when the temperature is changing on average, in which case heat storage unbalances the result). The outgoing radiation level sets an effective temperature, but at the altitude where it is (on average) located. The lapse rate (which is -g/Cp), and which naturally occurs in a well mixed atmosphere, does the rest. If the outgoing radiation all came from the surface, that effective temperature would be the surface temperature. If it comes from some altitude above the surface, the adiabatic lapse rate times that altitude has to be added, and the surface temperature is thus larger. The higher the altitude, the greater the added term (as on Venus). The analysis is actually much more complex due to day/night, seasonal, latitude, and albedo variations, but the basic process is as I stated.
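The bookkeeping Leonard describes (effective emission temperature plus lapse rate times emission altitude) can be sketched numerically. The figures below are rough, assumed illustrative values for Earth, not measurements: an effective radiating temperature of ~255 K, a mean emission altitude of ~5 km, and the observed average lapse rate of ~6.5 K/km; the dry value g/Cp is computed alongside for comparison:

```python
# Illustrative sketch of "surface T = emission-level T + lapse rate * altitude".
# All numbers are rough, assumed values for Earth, chosen only to show the
# bookkeeping; they are not precise measurements.

g = 9.81     # m/s^2, surface gravity
cp = 1004.0  # J/(kg K), specific heat of air at constant pressure

dry_lapse = g / cp * 1000  # K per km, the g/Cp slope (~9.8 K/km)

T_emission = 255.0  # K, effective radiating temperature of Earth
z_emission = 5.0    # km, rough mean altitude of outgoing radiation
mean_lapse = 6.5    # K/km, observed average (moist) lapse rate

# Surface temperature from the bookkeeping rule; lands near the observed ~288 K.
T_surface = T_emission + mean_lapse * z_emission

print(round(dry_lapse, 1), T_surface)
```

With these assumed inputs the rule returns about 287.5 K, close to Earth's observed mean surface temperature, which is the point Leonard is making about Venus at much greater emission altitude.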
Here is a summary quote:
"Accumulation of large amounts of carbon dioxide in the atmosphere leads to the cooling, and not to warming of climate, as the proponents of traditional anthropogenic global warming theory believe (Aeschbach-Hertig, 2006). This conclusion has a simple physical explanation: when the infrared radiation is absorbed by the molecules of greenhouse gases, its energy is transformed into thermal expansion of air, which causes convective fluxes of air masses restoring the adiabatic distribution of temperature in the troposphere."
I just noticed your reply to Anthony Bowler. I can clearly say the authors are in error. Radiation absorption and heating may cause expansion, but this just contributes to the mixing. In all real cases, LTE is a good approximation, so that is not an issue. The lapse rate will establish at -g/Cp as long as any process mixes the atmosphere well enough. However, the lapse rate is not a level, it is a slope (degrees C/km). The level is set by forcing a particular level in the atmosphere (or ground) to be at a specific temperature. If the ground radiated mostly directly to space (small atmospheric greenhouse gas level) the surface balance of absorbed and radiated energy determines the surface temperature, and the temperature drops with increasing altitude at the adiabatic lapse rate.
Atmospheres that have a reasonably large amount of atmospheric greenhouse gases (like Venus and Earth) absorb thermal radiation and re-radiate it in all directions. They thus act like radiation insulation, decreasing the radiation heat transfer. In that case, convective heat transfer dominates, and the radiation leaves the atmosphere from a high altitude. The effective location of the radiation leaving (this is actually more complicated, since it is a volume effect in a gas) sets the temperature at that altitude. The temperature then increases below that altitude due to the adiabatic lapse rate, and decreases above that altitude due to a combination of the adiabatic lapse rate and energy loss from outgoing radiation.
First, I want to thank you for your effort to give an honest opinion on AGW. I started out accepting the idea, and I still agree there is some effect. However, the evidence I obtained over more than a decade of significant reading of the literature and discussions made me skeptical of the claim of a significant effect. I am convinced the next approximately 20 years will be generally cooling, which will put the whole argument to rest. I am a senior research fellow at the National Institute of Aerospace (and all comments are my own), formerly at NASA, with degrees in Physics and Aerospace Engineering, including an ScD. There is one point I want to make regarding your claim that the CO2 will be present for many hundreds of years. There have been many studies showing that in fact a time constant of 7 to 15 years is the correct one. You don't need to remove all the CO2, just most of it, so using 10 or more time constants is misleading. There are two specific ways to see that the shorter time constant is correct. One was the dwell time of radioisotopes of atmospheric Carbon after nuclear tests, and the other follows directly from the seasonal variation of atmospheric CO2 (look at the maximum slopes of the variation over yearly cycles and use that slope to calculate a time constant).
Thank you for your precisely stated comments concerning the lifetime of CO2 in the atmosphere. I think I can clear it up – we are both right but are talking about two different things.
What I believe you are referring to is the lifetime of an average CO2 molecule in the atmosphere against being removed, either by dissolution into the oceans or lakes or by absorption by plants through photosynthesis. The value you suggest of about 7 to 15 years sounds reasonable and, as you said, a good way to determine that type of lifetime would be by radioisotope measurements after a nuclear test.
Now, it is important to recognize that the type of CO2 lifetime I am referring to is an entirely different one. It is an expression of the time required for the EXCESS CO2 above the normal natural levels to be removed from the atmosphere. It is this process that takes so long – even though individual CO2 molecules are indeed entering and leaving the atmosphere more quickly.
The type of lifetime you referred to is the type we generally use in expressing the atmospheric lifetime of common pollutants such as nitrous oxide or the chlorofluorocarbons, for example. For the case of CO2, however, we need this second type of residence time for the EXCESS because, of course, CO2 is always entering and leaving the atmosphere due to natural processes.
Please let me know if this explanation has helped to clear up this important point.
I am aware of the different time constant points you made. However, CO2 is CO2 whatever the source. If the radioisotope of Carbon is removed by plants, the plants die and rot and release the CO2. If the gas dissolves in the ocean, it later is just as likely to out-gas as other CO2. There is no reason for 2 separate time constants. If the human contribution and deforesting went to zero, but all natural sources continued, the biosphere, oceans, and rock erosion would remove about one time constant's worth of the excess above the natural balance in 7 to 15 years. It would be removed in exactly the same way as the radioisotope was (by being trapped in bio material that is effectively sequestered, by sea shells and dead sea life dropping to the sea floor, and by rock erosion). I have seen no supportable argument that shows otherwise. If you have an analysis or reference you can give that is specific, I would appreciate it. All of the so-called supporting arguments I have seen are hand waving.
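Leonard's single-time-constant picture can be made concrete. A minimal sketch, assuming a pure exponential relaxation of the excess above the natural balance, with an assumed tau of 10 years (the midpoint of his 7 to 15 year range) and an arbitrary illustrative pulse size:

```python
import math

# Single-time-constant relaxation of an excess CO2 pulse, as Leonard argues:
# excess(t) = excess0 * exp(-t / tau). tau = 10 yr is an assumed midpoint of
# his 7-15 yr range, and the 100 ppmv pulse is arbitrary, for illustration.

tau = 10.0       # years, assumed
excess0 = 100.0  # ppmv, illustrative pulse size

def excess(t_years):
    # Excess remaining above the natural balance after t_years
    return excess0 * math.exp(-t_years / tau)

# After one time constant ~63% of the pulse is gone; after three, ~95%.
print(round(excess(tau), 1), round(excess(3 * tau), 1))
```

This is what his phrase "one time constant removes most of it" amounts to: about 37% of the pulse remains after one tau, and about 5% after three.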
In the past, evidence shows that the temperature change caused the CO2 change, with a lag of about 800 years. Using the argument that CO2 caused previous hot periods is totally without convincing support (the lag but then sudden positive feedback is totally without support). I agree that some or possibly most of the present excess level of CO2 is human caused, unlike the past. If the present case is different, there is still no supporting evidence that there is positive feedback from CO2 induced increase of water vapor, and in fact some indication of negative feedback. CO2 is an atmospheric greenhouse gas, but of relatively small consequence.
While I certainly have made errors before and possibly am again, I do not think I am in error in this discussion of the two different ways of expressing CO2 lifetimes.
If atmospheric CO2 gets isotopically tagged by a nuclear bomb test, those tagged molecules will dissolve in the ocean and mix with its tremendous amount of natural CO2 (let's forget, in this discussion, the lesser amount that goes into plants). Now, if one is measuring the tagged molecules left in the air after the bomb test, one will get a pretty good measurement of the lifetime of individual CO2 molecules. The rate of emission of those tagged molecules back into the atmosphere will be essentially zero, because almost all of the ocean-emitted CO2 molecules will be normal, non-tagged molecules. This will lead to a lifetime of about 7 to 15 years.
Now if instead we could somehow instantly put a big extra plume of CO2 into the atmosphere, so that we suddenly have, say, 35% more than a moment ago, the rate at which this extra CO2 disappears will not be determined (as in the example above) simply by the rate at which CO2 dissolves in the ocean. This is because about the same amount is always being emitted by the ocean's surface layers. Therefore, the rate at which the EXTRA CO2 disappears is determined by the rate at which it is removed entirely from the BC world and transferred to the GC world and, of course, that takes a long time.
Concerning your additional comments about the 800 year delay of CO2 after temperature changes derived from the ice core record, I will be pleased to share my understanding of that fact a bit later – after I get some needed sleep.
In the meantime, thanks for your interest and interesting / educational comments.
Leonard, I found a good reference in which these various expressions of CO2's lifetimes are explained – along the same lines as I did above. It is http://geosci.uchicago.edu/~archer/reprints/arche…
I agree 7 to 15 years is the wrong time constant to use for a CO2 pulse. The removal of about half of the input gives the following:
"Dividing the inventory by the flux yields an apparent lifetime of 50 to 100 years, depending on whether the terrestrial uptake is counted in addition to the oceanic uptake. This type of calculation has been most recently presented by Jacobson (2005), who determined an atmospheric lifetime of 30 to 95 years. For the nonlinear CO2 uptake kinetics, as predicted by carbon cycle models, however, this apparent lifetime would increase with time after the CO2 is released. Some CO2 from the release would remain in the atmosphere thousands of years into the future."
This implies a time constant of 30 to 100 years. I also agree that "some" CO2 would remain for many time constants. That is how asymptotic processes work. However, one time constant removes most of the excess, and is the appropriate one to use.
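The "inventory divided by flux" calculation in the quote can be sketched with round numbers. The figures below (an excess of roughly 100 ppmv above the pre-industrial level, removed at about 2 ppmv per year) are illustrative assumptions consistent with values mentioned elsewhere in this thread, not measurements:

```python
# Apparent lifetime = excess inventory / net removal flux, as in the quote
# from Archer. Both numbers are rough illustrative assumptions.

excess_ppmv = 100.0        # assumed excess above the pre-industrial ~280 ppmv
removal_ppmv_per_yr = 2.0  # assumed net uptake by oceans + land

apparent_lifetime = excess_ppmv / removal_ppmv_per_yr  # years

print(apparent_lifetime)  # 50.0, inside the quoted 30-100 year range
```

With these round inputs the apparent lifetime comes out at 50 years, squarely inside the 30 to 100 year range both parties now seem to accept for this definition of the time constant.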
A separate but critical issue is the CO2 heating effect. The suggested paper assumes that CO2 and methane are major causes of global warming. Where is the warming? Also, the present Holocene is about 11,000 years old, and we are likely tending toward a glacial period. Yet the paper talks about thousands of years of effect. I hope so.
A major point that the paper does not address has to do with the ocean circulation. Much of the surface-ocean CO2 concentration increase is removed by the thermohaline circulation in relatively short time periods (order 100 years or less). Since the solubility of CO2 in colder water is far greater than in warm water, this is particularly effective at removing excess CO2. This takes excess dissolved CO2 out of play into the deep ocean for on the order of 1000 years or more (the deep-circulation return time), and mixes it with a far greater quantity of sea water, reducing the concentration. This effect will almost surely cause the increasing CO2 concentration to slow down as we approach the time constant of this effect. It also would purge the excess of a pulse in a fairly short additional time beyond the direct biological, solubility, and rock-weathering effects.
Leonard, Yes, the very slow circulation to the depths of the ocean is indeed a huge issue. This is why Arrhenius thought it would take about 3000 years for CO2 to double. We now know that mixing of the surface layers throughout the ocean depths takes hundreds or thousands of years and, therefore, the doubling point will be reached within this century. Because he lived in cold Sweden, his erroneous conclusion was a disappointment to him.
The CO2 mixes in the upper small fraction of the oceans (about 0.1) over reasonable time scales (up to several decades). Much of this layer is carried by currents to high latitudes and taken to the depths, typically in less than 100 years. It is then lost to the surface for 1000 years or more, and is mixed to a much lower concentration in the depths. Since the ocean holds almost 50 times as much dissolved carbon as the atmosphere, full mixing would result in a maximum dissolved-carbon increase of just over 2 percent even if 100 percent of the present quantity of atmospheric CO2 were removed. The rebalancing between types of dissolved Carbon is assured by the excess Ca ions available. Thus if the CO2 doubled in a pulse and half of that eventually dissolved (even ignoring removal by land plants, shell formation, and rock erosion), the water that resurfaced would have only a very slightly increased CO2 concentration, and the increased partial pressure returned to the atmosphere would have a negligible effect on a new equilibrium near the "base" level. Meanwhile this would effectively remove all of the excess CO2 from a pulse with a time constant of, say, 100 years. There is no thousands-of-years mechanism. In addition, no significant ocean acidification problem is likely, due to the removal of the surface layer on this time constant.
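The dilution claim here is simple arithmetic. A sketch, taking the comment's own figure that the ocean holds about 50 times the atmosphere's carbon:

```python
# Dilution arithmetic from the comment above: the ocean holds ~50x the
# atmosphere's dissolved carbon, so even absorbing ALL atmospheric CO2 raises
# the ocean's dissolved-carbon level by only ~2%. The 50x ratio is taken
# directly from the comment, not from independent data.

ocean_to_atm_ratio = 50.0

# Fractional increase in ocean carbon if 100% of atmospheric CO2 dissolved:
full_transfer = 1.0 / ocean_to_atm_ratio   # 0.02, i.e. ~2%

# The comment's scenario: CO2 doubles in a pulse and half of the pulse
# (i.e., half of one atmosphere's worth of carbon) eventually dissolves:
pulse_transfer = 0.5 / ocean_to_atm_ratio  # 0.01, i.e. ~1%

print(full_transfer, pulse_transfer)
```

This is the basis for the claim that the resurfacing deep water would carry only a very slightly elevated CO2 concentration back to the atmosphere.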
The present trend is more complicated. There is continual new human contribution to the CO2 level, and the majority has been introduced in far less than the 100 (or so) year ocean removal cycle. The recent rise in fact has a 50 percent time constant of about 50 years. I would guess that if the continual addition of CO2 went 50 more years at the same rate as at present, the atmospheric rise rate would level off well below 500 ppmv. If the volume of the addition continually increased over present levels (as is more likely for at least part of the 50 more years), the rise may not level off, but it would likely rise progressively more slowly due to the thermohaline replacement water being CO2 impoverished for the atmospheric partial pressure.
You know, in science, there was once this thing we called the Theory of Multiple Working Hypotheses. Anathema (a formal ecclesiastical curse accompanied by excommunication) in modern climate science. So, in juxtaposition to the hypothesis of future global climate disruption from CO2, a scientist might well consider an antithesis or two in order to maintain one's objectivity.
One such antithesis, which happens to be a long running debate in climate science, concerns the end Holocene. Or just how long the present interglacial will last.
Looking at orbital mechanics and model results, Loutre and Berger (2003) in a landmark paper (meaning a widely quoted and discussed paper) for the time predicted that the current interglacial, the Holocene, might very well last another 50,000 years, particularly if CO2 were factored in. This would make the Holocene the longest lived interglacial since the onset of the Northern Hemisphere Glaciations some 2.8 million years ago. Five of the last 6 interglacials have each lasted about half of a precession cycle. The precession cycle varies from 19-23k years, and we are at the 23kyr part now, making 11,500 years half, which is also the present age of the Holocene. Which is why this discussion has relevance.
But what about that 6th interglacial, the one that wasn't on the half-precessional "clock"? That would be MIS-11 (or the Holsteinian), which according to the most recently published estimate may have lasted on the order of 20-22kyrs, with the longest estimate ranging up to 32kyrs.
Loutre and Berger's 2003 paper was soon followed by another landmark paper by Lisieki and Raymo (Oceanography, 2004), an exhaustive look at 57 globally distributed deep Ocean Drilling Project (and other) cores, which stated:
"Recent research has focused on MIS 11 as a possible analog for the present interglacial [e.g., Loutre and Berger, 2003; EPICA community members, 2004] because both occur during times of low eccentricity. The LR04 age model establishes that MIS 11 spans two precession cycles, with δ18O values below 3.6‰ for 20 kyr, from 398-418 ka. In comparison, stages 9 and 5 remained below 3.6‰ for 13 and 12 kyr, respectively, and the Holocene interglacial has lasted 11 kyr so far. In the LR04 age model, the average LSR of 29 sites is the same from 398-418 ka as from 250-650 ka; consequently, stage 11 is unlikely to be artificially stretched. However, the June 21 insolation minimum at 65N during MIS 11 is only 489 W/m2, much less pronounced than the present minimum of 474 W/m2. In addition, current insolation values are not predicted to return to the high values of late MIS 11 for another 65 kyr. We propose that this effectively precludes a 'double precession-cycle' interglacial [e.g., Raymo, 1997] in the Holocene without human influence."
To bring this discussion up to date, Tzedakis, in perhaps the most open peer review process currently being practised in the world today (the European Geosciences Union website Climate of the Past Discussions), published a quite thorough examination of the state of the science, in which the present interglacial, the Holocene (or MIS-1), is compared to MIS-19 and MIS-11, the other two interglacials since the Mid Pleistocene Transition (MPT) to have occurred at eccentricity minimums. Since its initial publication in 2009, and its republication after the open online peer review process again in March of this year, this paper is now also considered a landmark review of the state of paleoclimate science. In it he also considers Ruddiman's Early Anthropogenic Hypothesis, with Ruddiman a part of the online review. Tzedakis' concluding remarks are enlightening:
"On balance, what emerges is that projections on the natural duration of the current interglacial depend on the choice of analogue, while corroboration or refutation of the “early anthropogenic hypothesis” on the basis of comparisons with earlier interglacials remains irritatingly inconclusive."
As we move further towards the construction of the antithetic argument, we will take a closer look at the post-MPT end interglacials and the last glacial for some clues.
An astute reader might have gleaned that even on things which have happened, the science is not that particularly well settled. Which makes consideration of the science being settled on things which have not yet happened dubious at best.
Higher resolution proxy studies from many parts of the planet suggest that the end interglacials may be quite the wild climate ride from the perspective of global climate disruption.
Boettger, et al (Quaternary International 207, 137–144) abstract it:
"In terrestrial records from Central and Eastern Europe the end of the Last Interglacial seems to be characterized by evident climatic and environmental instabilities recorded by geochemical and vegetation indicators. The transition (MIS 5e/5d) from the Last Interglacial (Eemian, Mikulino) to the Early Last Glacial (Early Weichselian, Early Valdai) is marked by at least two warming events as observed in geochemical data on the lake sediment profiles of Central (Gröbern, Neumark–Nord, Klinge) and of Eastern Europe (Ples). Results of palynological studies of all these sequences indicate simultaneously a strong increase of environmental oscillations during the very end of the Last Interglacial and the beginning of the Last Glaciation. This paper discusses possible correlations of these events between regions in Central and Eastern Europe. The pronounced climate and environment instability during the interglacial/glacial transition could be consistent with the assumption that it is about a natural phenomenon, characteristic for transitional stages. Taking into consideration that currently observed "human-induced" global warming coincides with the natural trend to cooling, the study of such transitional stages is important for understanding the underlying processes of the climate changes."
Hearty and Neumann (Quaternary Science Reviews 20, 1881–1895), abstracting their work in the Bahamas, state:
"The geology of the Last Interglaciation (sensu stricto, marine isotope substage (MIS) 5e) in the Bahamas records the nature of sea level and climate change. After a period of quasi-stability for most of the interglaciation, during which reefs grew to +2.5 m, sea level rose rapidly at the end of the period, incising notches in older limestone. After brief stillstands at +6 and perhaps +8.5 m, sea level fell with apparent speed to the MIS 5d lowstand and much cooler climatic conditions. It was during this regression from the MIS 5e highstand that the North Atlantic suffered an oceanographic "reorganization" about 118±3 ka ago. During this same interval, massive dune-building greatly enlarged the Bahama Islands. Giant waves reshaped exposed lowlands into chevron-shaped beach ridges, ran up on older coastal ridges, and also broke off and threw megaboulders onto and over 20 m-high cliffs. The oolitic rocks recording these features yield concordant whole-rock amino acid ratios across the archipelago. Whether or not the Last Interglaciation serves as an appropriate analog for our "greenhouse" world, it nonetheless reveals the intricate details of climatic transitions between warm interglaciations and near glacial conditions."
The picture which emerges is that the post-MPT end interglacials appear to be populated with dramatic, abrupt global climate disruptions, which appear to have occurred on decadal to centennial time scales. Given that the Holocene, one of at least 3 post-MPT "extreme" interglacials, may not be immune to this repetitive phenomenon, and as it is half a precession cycle old now, and perhaps unlikely to grow that much older, this could very well be the natural climate "noise" from which we must discern our anthropogenic "signal".
If we take a stroll between this interglacial and the last one back, the Eemian, we find in the Greenland ice cores that there were 24 Dansgaard-Oeschger oscillations: abrupt warmings, taking just a few years to mere decades, that averaged 8-10C rises (D-O 19 scored 16C). The nominal difference between earth's cold (glacial) and warm (interglacial) states is on the order of 20C. D-O events average 1470 years apart, the range being 1-4kyrs.
Sole, Turiel and Llebot, writing in Physics Letters A (366, 184–189), identified three classes of D-O oscillations in the Greenland GISP2 ice cores: A (brief), B (medium) and C (long), reflecting the speed at which the warming relaxes back to the cold glacial state:
“In this work ice-core CO2 time evolution in the period going from 20 to 60 kyr BP has been qualitatively compared to our temperature cycles, according to the class they belong to. It can be observed in Fig. 6 that class A cycles are completely unrelated to changes in CO2 concentration. We have observed some correlation between B and C cycles and CO2 concentration, but of the opposite sign to the one expected: maxima in atmospheric CO2 concentration tend to correspond to the middle part or the end of the cooling period. The role of CO2 in the oscillation phenomena seems to be more related to extend the duration of the cooling phase than to trigger warming. This could explain why cycles not coincident in time with maxima of CO2 (A cycles) rapidly decay back to the cold state.”
"Nor CO2 concentration either the astronomical cycle change the way in which the warming phase takes place. The coincidence in this phase is strong among all the characterised cycles; also, we have been able to recognise the presence of a similar warming phase in the early stages of the transition from glacial to interglacial age. Our analysis of the warming phase seems to indicate a universal triggering mechanism, what has been related with the possible existence of stochastic resonance [1,13, 21]. It has also been argued that a possible cause for the repetitive sequence of D/O events could be found in the change in the thermohaline Atlantic circulation [2,8,22,25]. However, a cause for this regular arrangement of cycles, together with a justification on the abruptness of the warming phase, is still absent in the scientific literature."
In their work, for at least 13 of the 24 D-O oscillations (indeed, other workers suggest the same for all of them), CO2 was not the agent provocateur of the warmings but served to moderate the relaxation back to the cold glacial state, something which might have import whenever we finally do reach the end Holocene. Instead of triggering the abrupt warmings, it appears to function as somewhat of a climate "security blanket", if you will.
Therefore in constructing the antithesis, and taking into consideration the precautionary principle, we are left to ponder if reducing CO2’s concentration in the late Holocene atmosphere might actually be the wrong thing to do.
I stated first that, having an open mind, I was convinced by you that removal of excess CO2 by the sequestration methods (ocean shell formation, plant Carbon sequestration, and rock weathering) proceeds, at the present level of CO2, at about 2 ppmv per year. This results in a time constant of 30 to 100 years (depending on assumptions). However, there is also a ~100 year sea-surface turnover, which directly carries excess dissolved CO2 into the depths and dilutes it. This last term would not greatly speed up removal to much less than 100 years, so it is not a major factor yet. However, it would prevent the excess from persisting for thousands of years as has been claimed, and it will slow down or even reverse the rise over time, as it becomes fully active.
Thanks Leonard. So what you are saying – versus other literature on this topic – differs because of your presumed sea-surface turnover rate. I had thought from reading that literature that the rate was about 1,000 years, while you believe it is more like 100 years. I agree that this point is very important in terms of what happens in the long run, and I will look more closely myself at the various claims concerning it. After our extended discussions above, I just wanted first to be sure where the difference was.
The surface removal is of order 100 years (less for high-latitude currents). The undersea mixing and return is of order 1000 or more years. Thus the total turnover is over 1000 years. However, because the water below the thermocline (mixed layer) is about 10 times as deep as the mixed surface layer, and thus dilutes any effect about 10 times, what happens once the surface water is removed and sent to the deep is of little interest (both because it is strongly diluted, and due to its slow return). It is not the total turnover time that is important, only the surface removal speed.
Thanks for your last clarification which I accept and had, indeed, not known enough about. So many thanks, I love learning new things!!
At the same time, I have been reviewing some of my materials concerning CO2 exchange between the atmosphere and ocean and realize that we have not yet included two important points.
As the ocean gets slightly acidified by increases in the concentration of atmospheric CO2 (as it surely will, in accordance with known acid-base equilibrium chemistry), and as the ocean gets slightly warmer (as direct measurements show it is), the dynamics and equilibria of CO2 exchange between the atmosphere and the surface layers of the oceans continuously change to disfavor the loss of CO2 from the atmosphere to the oceans. These two additional considerations, I think, will explain why the literature suggests time constants far greater than yours for the removal of the EXTRA CO2 from the atmosphere.
A tiny change in atm CO2 would have little effect on those exchange constants and, in that case, your calculations would apply. The fact is, however, that today's CO2 level is 35% greater than before the Industrial Revolution, and that is not a small change. As the CO2 level continues to rise each year, those exchange constants will continue to change in favor of atmospheric CO2.
Thus, I believe that our EXCESS CO2 level of today, and even more so those of the future, will last very much longer than your estimate of 100 years. While I can't put a number on the additional time myself, it seems reasonable to me that these two factors explain the great difference between your estimates provided here and those I have noted in the literature.
Does this make sense to you?
Sorry to butt in here Dr Eric, but can we avoid using the term "increase in acidity" (which to me sounds alarmist) and adopt the more technically correct term "decrease in alkalinity"? From the literature, I believe it is impossible for the oceans to become acidic.
Claims are being made that the ocean is being acidified as the atmospheric partial pressure is increasing. This counters the slight solubility change (it is the difference in partial pressures times the current solubility that determines the direction of exchange). You can't have it both ways-acidification (net in) or net out due to increasing temperature. Please choose and justify.
Leonard, One of us does not understand the equilibria involved here.
As CO2 in the atmosphere increases, more CO2 goes into the oceans. This in turn causes an increase in acidity, because Dissolved CO2 + H2O = H2CO3, and H2CO3 is a weak acid. Thus the ocean becomes slightly more acidic due to:
H2CO3 = H+ + HCO3-.
Now as more CO2 is added to the system and acidity (H+) further increases, the equilibrium position shown above shifts somewhat to the left. Thus an increasingly smaller fraction of future dissolved CO2 (H2CO3, that is) is converted to the bicarbonate anion, HCO3-.
This, in turn, shifts the equilibrium
CO2 (gas phase) = CO2 (dissolved, i.e. H2CO3)
to the left and less atm CO2 is thereby dissolved.
So in this case, one does have it both ways. As the acidity of the ocean is increased, an increasingly lower fraction of atm CO2 is dissolved.
Reference: all of this is basic chemical equilibria and this specific example is often included in standard texts on Analytical Chemistry, such as Skoog and West, just to pick one of many.
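The leftward shift Dr. Eric describes can be illustrated with the first dissociation constant of carbonic acid. This sketch assumes the freshwater value pK1 ≈ 6.35 for illustration (seawater's effective pK1 is lower, but the direction of the shift is the same):

```python
# Le Chatelier illustration of the shift described above: the ratio
# [HCO3-]/[H2CO3*] = K1/[H+] = 10**(pH - pK1), so as the surface layer
# acidifies (H+ rises, pH falls), a smaller multiple of the dissolved CO2
# ends up as bicarbonate. pK1 = 6.35 is the freshwater value, assumed here
# purely for illustration.

pK1 = 6.35

def bicarb_to_co2_ratio(pH):
    # [HCO3-]/[H2CO3*] at the given pH
    return 10 ** (pH - pK1)

r_81 = bicarb_to_co2_ratio(8.1)  # roughly pre-industrial surface pH
r_80 = bicarb_to_co2_ratio(8.0)  # ~0.1 unit more acidic

# The ratio drops by the factor 10**0.1 ~ 1.26 as pH falls 0.1 unit.
print(round(r_81, 1), round(r_80, 1), round(r_81 / r_80, 2))
```

The absolute ratios depend on the assumed pK1, but the factor by which the bicarbonate fraction falls for a 0.1-unit pH drop (about 1.26) does not.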
Leonard, Sorry, I forgot to include the temperature effect on these equilibria.
As temperature increases, the equilibrium CO2 (gas) = CO2 (dissolved) also shifts to the left, further disfavoring the dissolution of CO2 (gas) and thereby adding to the acid-base effect described above.
You again don't include the fact that the upwelling layer of lower CO2 content replaces the more acidified surface layer with a time constant of less than 100 years. It is not the deep circulation time that is relevant here, but the fact that when the more saturated surface layer is removed, water that went down over 1000 years ago upwells to replace it. Thus there is no increased dissolved CO2 in the replacement water (and I am fully aware of the equations of equilibria that convert it to other forms). It is the continual removal of the surface layer and its replacement with less saturated water that allows the oceans to dissolve the higher CO2 content efficiently.
You again point to the increase in CO2, which I agree has been partially due to human activity. If the CO2 increase cannot be shown by data to significantly increase temperature (part or most of the small 0.8 C rise, or even slightly less than 0.8 C given the much questionable data of the last 150 years, is not thought to be human caused, even by CAGW supporters, but rather recovery from the LIA), and if ocean acidification is no problem, and if more CO2 and a slight temperature rise increase crop production, your case for the need for action evaporates.
I now observe the claim that even if the average temperature stopped rising, or even fell, larger regional variations (flooding, drought, etc.) would still be due to CAGW. These local variations have not been shown to be unusual: they occur regularly, and no special difference has been demonstrated. Ocean heat content, its rise rate, cyclone energy level, etc. are all within normal variation, and cyclone activity is actually at a relative low.
I think you will agree that the charge of speculation does not apply if my account of the acid-base and temperature equilibria is correct, right? So consider again what I explained concerning the acid-base and temperature equilibrium constants, and I am quite sure you will see that we are continuously heading toward a condition in which the ocean's ability to remove CO2 is decreasing. The important equilibrium constants are continuously shifting to the left, and that accounts for EXTRA CO2 residence times that greatly exceed those deduced when these changes are not accounted for.
I did look back at your previous comment, and after refreshing my memory on these ocean effects we have been discussing, I now suspect that the following is why the very long residence times of extra CO2 described in the literature are so much longer than your estimate of about 100 years.
This is because of the boundary layer that separates the atmosphere from the ocean depths. That boundary layer comes into near equilibrium with the atmosphere relatively quickly (in a few decades). Therefore, as the acidity (and temperature) of the boundary layer goes up due to the extra CO2, both of these changes result in an increasingly smaller fraction of CO2 residing in that ocean surface boundary layer.
Therefore, as that surface layer then mixes with the depths of the oceans on a (1/e) time scale of about a century, it carries less CO2 to the depths as a result of these acid-base and temperature factors.
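The (1/e) time scale just mentioned behaves as a simple exponential relaxation. A minimal sketch (the single 100-year time constant is an idealization; the real ocean mixes on a spectrum of time scales):

```python
import math

# Idealized relaxation of a surface-layer anomaly with a single
# 1/e mixing time constant of 100 years.
TAU = 100.0  # years

def fraction_remaining(t_years):
    """Fraction of the original surface anomaly left after t years."""
    return math.exp(-t_years / TAU)

for t in (50, 100, 200, 500):
    print(t, round(fraction_remaining(t), 3))
```

At t = 100 years exactly 1/e (about 37%) remains, which is what "a (1/e) time scale of a century" means.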
It is important to recall that the acid-base factor being discussed here is not small. Since 1850, the pH of the surface layer has decreased by about 0.1 pH unit. That translates to an increase in acidity (H+ concentration) of roughly 30%. This, in turn, pushes the equilibrium reactions previously described to the left, disfavoring CO2 solubility by about the same amount.
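The pH arithmetic here is easy to verify: a 0.1-unit drop corresponds to an H+ increase by a factor of 10^0.1.

```python
# A drop of 0.1 pH unit means H+ rises by a factor of 10**0.1,
# since pH = -log10([H+]).
increase = 10 ** 0.1
print(f"H+ increase factor: {increase:.3f}  (+{(increase - 1) * 100:.0f}%)")
```

The exact figure is about +26%, which is commonly rounded to the "about 30%" quoted above.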
I am going to have to check all these threads more often! There is some really good new discussion on this one.
To William McClenney (10/22/10):
Thanks for the brief review of the recent literature on interglacial terminations. I think I now understand where Heidi Cullen, the former Climate Change commissarette at the Weather Channel came up with her assurance that the next glaciation would not start for 30,000 years.
To Dr. Grimsrud:
I do not understand the argument in your post of 10/23/10 (3:38 p.m.) concerning ocean chemical buffering of CO2 absorption. [First off, I really, really do not understand how you could have posted that comment from the future, for as I write this, it is 1:15 p.m. on 10/23/10!!!!! If you can indeed travel through time, most of our verification problems should be solved.]
My main concern is this: Isn't the solubility of CO2 in water overwhelmingly a physical process (not a chemical one), dependent on the CO2 concentration in the surface water and surface air layer, the temperature of the water, wind speed, and to a much lesser extent on the salinity of the water at the point of absorption? Further, if chemical buffering has anything to do with the rate at which CO2 is dissolved, what is the magnitude of this effect in comparison to the Henry's Law physical process? Finally, did you mean to claim that all or most CO2 dissolved in water is not "stored" there as, or does not remain there as, CO2? (This dispute about the existence and/or magnitude of "chemical buffering" of the solubility of CO2 has greatly exercised two skeptics: Jeffrey Glassman (see "The Acquittal of Carbon Dioxide" at his web site RocketScientist'sJournal.com), discussed mostly in the thread, and Tom Segalstad in various papers.)
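The Henry's Law physical process asked about here can be sketched as follows. Both constants below are textbook freshwater approximations, not seawater values, so treat the output as illustrative only.

```python
import math

# Henry's-law CO2 solubility and its temperature dependence.
# KH_298 is the solubility at 25 C in fresh water; VANT_HOFF is a
# typical van 't Hoff temperature coefficient for CO2.
KH_298 = 0.033    # mol/(L*atm) at 298.15 K
VANT_HOFF = 2400  # K

def henry_constant(temp_k):
    """Henry's-law solubility constant at temperature temp_k (kelvin)."""
    return KH_298 * math.exp(VANT_HOFF * (1.0 / temp_k - 1.0 / 298.15))

def dissolved_co2(p_co2_atm, temp_k):
    """Dissolved CO2 (mol/L) in equilibrium with partial pressure p_co2_atm."""
    return henry_constant(temp_k) * p_co2_atm

p = 389e-6  # ~389 ppmv expressed as a partial pressure in atm
print(dissolved_co2(p, 288.15))  # cooler water dissolves more CO2
print(dissolved_co2(p, 298.15))
```

The temperature term shows the purely physical effect: at 15 C the same air holds noticeably more CO2 in solution than at 25 C, independent of any chemical buffering.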
There is an interesting recent paper, Wolfgang Knorr (2009), "Is the airborne fraction of anthropogenic CO2 emissions increasing?", Geophys. Res. Lett. 36. Contrary to the predictions of various coupled climate-carbon-cycle models, there is no trend in that fraction from 1850 to the present, and only about 40% of anthropogenic CO2 emissions have stayed in the atmosphere. If there were a lot of "buffering" going on at an ever-increasing rate, and/or the capacity of the ocean to dissolve CO2 through the physical solubility process were diminishing, shouldn't we be seeing some evidence of it? And why the constant fraction, 40%, over 160 years with a rise in CO2 from approx. 285-290 ppmv to 389 ppmv? Knorr does not attempt to answer this second question. (I got this paper for free, in full journal format, but I can't remember where!) I really don't know much about all of this and want to learn more.
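The ~40% figure can be sanity-checked with rough arithmetic. The 2.13 GtC-per-ppmv conversion is a standard atmospheric-mass factor; the cumulative-emissions total below is an illustrative placeholder, not a figure taken from Knorr's paper.

```python
# Rough airborne-fraction arithmetic in the spirit of the Knorr (2009)
# discussion. GTC_PER_PPM converts atmospheric CO2 (ppmv) to gigatonnes
# of carbon; the emissions total is an illustrative placeholder.
GTC_PER_PPM = 2.13

def airborne_fraction(ppm_rise, cumulative_emissions_gtc):
    """Fraction of cumulative emissions still resident in the atmosphere."""
    return (ppm_rise * GTC_PER_PPM) / cumulative_emissions_gtc

# 285 -> 389 ppmv rise; assume ~550 GtC cumulative emissions (placeholder)
print(round(airborne_fraction(389 - 285, 550.0), 2))
```

With these assumed inputs the fraction comes out near 0.4, consistent with the constant ~40% the paper reports.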
To Leonard Weinstein:
We both agree that increasing the concentration of an infrared-absorbing gas in the atmosphere can heat the surface (or, as I like to say, sets up a vector for warming there). I noticed that in your excellent discussion with Dr. Grimsrud and others on radiation transfer and "insulation" you never mention "backradiation" (toward the surface) and the role it plays in any GHG warming of the surface that might occur. [You do say at one point that GHGs "absorb thermal radiation and re-radiate it in all directions."] If you have time, could you briefly discuss your "take" on the role of backradiation in GHG heating of the surface? [Of course, Dr. Grimsrud or anyone else is invited to join in.]
Just as a solid thermal insulator passes a small amount of energy in the hot-to-cold direction, so does a thermally insulating gas pass a small amount of radiation energy from the hot surface through the gas and outward. If there were no absorption of the radiation by the gas, all of the surface radiation energy would leave the hot surface and pass directly through the gas. If the gas absorbs the thermal radiation, it re-radiates it in all directions. Some of this re-radiation is directed back toward the hot source, and the difference between the outgoing and back radiation results in a lowering of the net radiation energy transfer. To answer your question: there is backradiation, but how it affects the temperature depends on one other factor. The case of thermal radiation in a gas differs from conduction in a solid in that the gas can move, so convection can also transport energy from the hot surface through the gas and outward. Despite the convective energy transfer in a gas, there will still be some radiation energy transfer, even if it is small compared to the convection. For Earth and Venus, convective heat transfer dominates radiative heat transfer. In both cases the atmospheric adiabatic lapse rate is established because there is sufficient mixing of the atmosphere on average (the adiabatic lapse rate does not depend on the radiation as long as convection greatly dominates).
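The net-transfer reduction described here follows directly from the Stefan-Boltzmann law; the temperatures below are illustrative round numbers, not measured values.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def net_radiation(t_hot, t_cold, emissivity=1.0):
    """Net radiative flux (W/m^2) from a hot surface facing a colder body."""
    return emissivity * SIGMA * (t_hot**4 - t_cold**4)

# A 288 K surface radiating to 2.7 K space, vs. the same surface
# facing a 260 K absorbing gas layer that radiates back toward it.
print(net_radiation(288, 2.7))   # no absorbing gas: nearly full emission
print(net_radiation(288, 260))   # absorbing layer: net flux much reduced
```

The surface emits the same amount in both cases; backradiation from the cooler layer simply reduces the net outflow, which is the "radiation insulator" effect.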
The argument on back radiation seems to confuse many people. If you consider the basic radiation equation, there is a term [T(hot)^4 - T(cold)^4]. This is, in fact, all there is to backradiation. The presence of a slightly cooler body next to a hot body reduces the NET radiated energy below the level without the cooler body present. The back radiation is not heating the surface (heat can only pass from hot to colder); it is reducing the net radiation energy out (thus it acts as a radiation insulator). For an atmosphere, this radiation reduction, combined with the atmospheric adiabatic lapse rate, makes the surface hotter than it would otherwise be. Note the need for the lapse-rate effect. The way it makes an atmosphere hotter is by moving the location of radiation to space to a high altitude. The radiation to space has to equal the absorbed radiation, and this radiation level sets a temperature at the effective altitude from which the outgoing radiation leaves. The adiabatic lapse rate, which is a temperature gradient, not a temperature level (see Wikipedia for details), then adds temperature toward the ground from adiabatic compression of the atmosphere.
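The emission-altitude-plus-lapse-rate picture just described can be sketched with round numbers for Earth; all three values below are common approximations, not precise figures.

```python
# Surface temperature from the effective emission altitude plus the
# adiabatic lapse rate, using round illustrative numbers for Earth.
T_EFFECTIVE = 255.0  # K, temperature needed to balance absorbed sunlight
LAPSE_RATE = 6.5     # K/km, average tropospheric lapse rate
Z_EMISSION = 5.0     # km, rough effective radiating altitude

surface_temp = T_EFFECTIVE + LAPSE_RATE * Z_EMISSION
print(surface_temp)
```

The result lands near Earth's observed ~288 K mean surface temperature: raising the emission altitude, with the lapse rate fixed, is what raises the surface temperature.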
I like Leonard Weinstein's assertion that the Venus surface temperature is high because the effective infrared radiation altitude is high enough that the gas temperature there is low enough to diminish the outward IR radiation flux. This is an interesting aspect of the "greenhouse effect" to investigate here.
Recall that the so-called "optical depth" or "effective radiation altitude" (ERA) (they mean the same thing, except that the former is reckoned from the ground outward while the latter is reckoned from outer space inward) is a strong function of wavelength. In other words, what we call IR "windows" (wavelength ranges of nil or minimal absorption) are also wavelength ranges where the ERA is much lower, where the atmosphere is much warmer. Thus, the effectively radiating Venus black-body temperature is hotter than recently asserted. One will simply "see" (in the infrared) a hot black body whose IR spectrum has dark (cold) lines (just as we see every day in the solar fine-line spectrum), so "cold" wavelength ranges punctuate a "hot" continuum spectrum peaking somewhere in the infrared. I have not seen those data yet, but my intuition says they should occur and should now be documented. The unlit half of the Venus disk is available much of our year for such inspection. It may be that the unlit side of Venus is not so cold a black body after all….
Here is an interesting quote from Harold Kroto, who won the 1996 Nobel Prize in Chemistry for the buckyball. I like his definition.
"I have a four-out-of-five rule for scientific method. Here it is: If you make an observation, develop a theory you think can explain it. Then design some further experiments to test the validity of that theory. If four observations out of five fit, the theory is almost, and I stress almost, certainly right. If only one out of five fits, the theory is almost, almost certainly wrong. We can never say it was wrong. But we can say it’s almost certainly wrong. We must leave the way open for that element of doubt…."
Eloquent but irrelevant, Tom. Falsifiability is a requisite incorporated in the Daubert Standard, and so should certainly be employed here.
@70, 71, 72,
I am pleased to see that epistemological issues are taken up again in this post. It surely is relevant – in the light of the line of reasoning we see in the other posts.
The issue of falsifiability is important, though perhaps overrated as a standard, but (as usual) I would also like to draw your attention to the way a theory or hypothesis is arrived at. The major distinction is the deductive versus the inductive method (and I claim only the latter is valid).
As an archetype of deductive reasoning we may consider the theories (if they qualify for that) of René Descartes. He often reasoned from completely arbitrary general assumptions and then derived, i.e. deduced specifics from them. You only need to look at the disastrous results of this approach to appreciate its invalidity.
On the other hand, Newton's work shows an inductive approach, starting from observations (including the works of Kepler etc.) and then discovering commonality in specifics to formulate a general theory uniting the observations and enabling predictions about new instances.
It is my strong suspicion that the case for CO2-driven global warming rests on a deductive approach, with a fuzzy, semi-plausible physical absorption/warming mechanism at its core. Once you accept this, circular reasoning gets introduced and you can spend many hours discussing the wrong things.
More to follow. Tom Curtis, I will comment on your interesting effort to refute the falsification criterion as soon as I can.
@76 Tom Curtis,
In the Post G: Greenhouse Gas Effects there is a discussion going on about what you here refer to as 'well established laws' or, rather, the application of such laws to so complex a physical configuration as the earth and its atmosphere. As you may appreciate when you read the entries (or a paper such as the one by 'G&T'), there is confusion and controversy as to what exactly 'a greenhouse effect' is (or is claimed to be). At this moment I am getting all this sorted out, for myself at least. But I think it is an ominous sign that many descriptions within climatology are just plainly wrong (absolutely*, that is), while we are still digging for a proper account of that theory! I am therefore eagerly waiting for Dr. Eric's response to this question (contained in GGE-@81). Leonard Weinstein says he understands by now, but apparently not on the basis of one particular source.
* By the way, I do not agree with your use of the concept 'absolute' in epistemology. It sounds as if you are adopting the popular notion that 'absolutes' do not exist (given phrases such as "Of course, even that is not absolute."). I think that a proper theory of knowledge should acknowledge that certainty is possible and contextual (relative to the whole body of integrated knowledge). One should avoid confusing 'the absolute' with omniscience.
Last but not least, I claim that the inductive approach is the proper way of ARRIVING at a VALID hypothesis. Inductive reasoning is self-correcting, as the example of Lavoisier (on acidity) illustrates. There is ample evidence in the history of science, and I am glad that you also refer to such specifics. I will look them up and comment where I think it is useful. The same applies to your concept of 'abduction', which I find odd as a third category. About deduction: it is a valid form of reasoning of course, but that does not mean that Descartes' method, or any rationalistic method, is the right way to formulate a hypothesis and theory.
I have suggested elsewhere that we consider specific examples to gain an appreciation of the epistemology involved in true science. I must say that even within climatology I have recently seen several examples which struck me as completely in accordance with the inductive approach. It will be no surprise that they have to do with the sun as a prime factor in controlling climate, not by way of dogma, but through the observation of consistent, possibly causal correlations. New entries will follow accordingly.
Speaking about "scientific methods"…
Here is an interesting example of peer-reviewed science in "marginal areas" such as the American Diabetes Association:
Doesn't it remind you of the state of the art in some other areas of "science," such as climatology?