by John R. Christy, University of Alabama in Huntsville, February 2, 2016
I am John R. Christy, Distinguished Professor of Atmospheric Science, Alabama’s State Climatologist and Director of the Earth System Science Center at The University of Alabama in Huntsville. I have served as Lead Author, Contributing Author and Reviewer of United Nations IPCC assessments, have been awarded NASA’s Medal for Exceptional Scientific Achievement, and in 2002 was elected a Fellow of the American Meteorological Society.
It is a privilege for me to offer my analysis of the current situation regarding (1) the temperature datasets used to study climate, (2) our basic understanding of climate change and (3) the effect that regulations, such as the Paris agreement, might have on climate. I have also attached an extract from my Senate Testimony last December in which I address (1) the popular notion that extreme climate events are increasing due to human-induced climate change (they are not), and (2) the unfortunate direction research in this area has taken.
My research area might be best described as building datasets from scratch to advance our understanding of what the climate is doing and why – an activity I began as a teenager over 50 years ago. I have used traditional surface observations as well as measurements from balloons and satellites to document the climate story. Many of our UAH datasets are used to test hypotheses of climate variability and change.
(1.1) Upper air temperature data from satellites and balloons
I shall begin with a discussion that was precipitated by an increasingly active campaign of negative assertions made against the observations, i.e. the data, of upper air temperatures.
Figure 1 in particular has drawn considerable attention from those who view the climate system as undergoing a rapid, human-caused transformation into a climate to which people would have great difficulty adapting. This simple chart tells the story that the average model projection, on which their fears (or hopes?) are based, does poorly for the fundamental temperature metric that is allegedly the most responsive to extra greenhouse gases – the bulk atmospheric temperature of the layer from the surface to 50,000ft.
[The layer shown is known as the mid-troposphere or MT and is used because it overlaps with the region of the tropical atmosphere that has the largest anticipated signature of the greenhouse response by bulk mass – between 20,000 and 50,000 feet.]
The chart indicates that the theory of how climate changes occur, and the associated impact of extra greenhouse gases, is not understood well enough even to reproduce the past climate [much more in section (2)]. Indeed, the models clearly overcook the atmosphere. The issue for Congress here is that such demonstrably deficient model projections are being used to make policy.
Because this result challenges the current theory of greenhouse warming in relatively straightforward fashion, there have been several well-funded attacks on those of us who build and use such datasets and on the datasets themselves. As a climate scientist I’ve found myself, along with fellow like-minded colleagues, tossed into a world more closely associated with the character assassination and misdirection found in Washington politics, for example, than with the objective, dispassionate discourse commonly assumed of the scientific endeavor.
Investigations of us by Congress and the media are spurred by the idea that anyone who disagrees with the climate establishment’s view of dangerous climate change must be on the payroll of scurrilous organizations or otherwise mentally deficient. Also thrust into this milieu is promotional material, i.e., propaganda, attempting to discredit these data (and researchers) with claims that amount to nothing.
Several of these allegations against the data appeared a few weeks ago in the form of a well-made video. I shall address the main assertions with the following material, which in similar form has appeared in the peer-reviewed literature through the years.
The video of interest was promoted by a climate change pressure group, Yale Climate Connections, in which well-known scientists make claims that are mostly meaningless or completely wrong relative to the evidence in Fig. 1.
I wish to make four points regarding the video and demonstrate the misdirection for which such agendized videos, along with a happily mimicking media, are so famous.
First, the claim is made that satellites do not measure temperature. In reality, the sensors on satellites measure temperature by emitted radiation – the same method that a physician uses to measure your body temperature to high precision using an ear probe. Atmospheric oxygen emits microwaves, the intensity of which is directly proportional to the temperature of the oxygen, and thus the atmosphere.
That the satellites measure temperature is evident by the following chart which compares our UAH satellite data with temperatures calculated from balloon thermistors. As an aside, most surface temperature measurements are indirect, using electronic resistance.
Second, the scientists claim that the vertical drop (orbital decay) of the satellites due to atmospheric friction causes spurious cooling through time. This vertical fall has an immeasurable impact on the layer (MT) used here and so is a meaningless claim. In much earlier versions of another layer product (LT or Lower Troposphere), this was a problem, but was easily corrected almost 20 years ago. Thus, bringing up issues that affected a different variable that, in any case, was fixed many years ago is a clear misdirection that, in my view, demonstrates the weakness of their position.
Third, the scientists speak of the spurious temperature changes that occur as the satellites drift in the east-west direction, the so-called diurnal drift problem (which was first detected and accounted for by us). They speak of a sign error in the correction procedure that changed the trend. Again, this error was not a factor in the MT layer in Fig. 1, but for the different LT layer. And, again, this issue was dealt with for LT 10 years ago.
Finally, though not specifically mentioned in this video, some of these scientists claim Fig. 1 above is somehow manipulated to obscure what they believe to be the prowess and validity of the climate models. To this, on the contrary, I say that we have displayed the data in its most meaningful way.
The issue here is the rate of warming of the bulk atmosphere, i.e., the trend. This metric tells us how rapidly heat is accumulating in the atmosphere – the fundamental metric of global warming. To depict this visually, I have adjusted all of the datasets so they have a common origin.
Think of this analogy: I have run over 500 races in the past 25 years, and in each one all of the runners start at the same place at the same time for the simple purpose of determining who is fastest and by how much at the finish line. Obviously, the overall relative speed of the runners is most clearly determined by their placement as they cross the finish line – but they must all start together.
In the same way I constructed the chart so the trend line of each of the temperature time series starts at the same point in magnitude and time (zero value at 1979) so the viewer may see how wide the spread is at the finish line (2015). One way to look at this is seen in Fig. 3, where I provide what is seen in Fig. 1 except with only the trend line, without the variations that occur from year to year due to volcanoes and the like. This is analogous to plotting the overall average speed of a runner along the course even though they likely ran slower uphill and faster downhill.
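The common-origin alignment described above can be sketched in a few lines of Python. This is a minimal illustration using synthetic series, not the actual model or observational data; the trend values are arbitrary placeholders:

```python
import numpy as np

def align_at_origin(years, series):
    """Fit a linear trend to a temperature anomaly series and shift the
    series so its fitted trend line passes through zero in the first year.
    `years` and `series` are 1-D arrays of equal length."""
    slope, intercept = np.polyfit(years, series, 1)
    # Value of the fitted trend line in the first year (e.g. 1979)
    origin_value = slope * years[0] + intercept
    return series - origin_value, slope

# Illustrative synthetic series (not real data), 1979-2015
years = np.arange(1979, 2016)
rng = np.random.default_rng(0)
model_like = 0.021 * (years - 1979) + rng.normal(0, 0.05, years.size)
obs_like = 0.0095 * (years - 1979) + rng.normal(0, 0.05, years.size)

aligned_model, trend_model = align_at_origin(years, model_like)
aligned_obs, trend_obs = align_at_origin(years, obs_like)
# Both aligned series now start from a common zero point in 1979, so the
# spread at 2015 reflects the difference in trend - the "finish line".
```

Because each shift is a constant offset, the alignment changes nothing about any series' trend; it only makes the trends visually comparable from a common starting point.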
This image indicates the models, on average, warm this global layer about 2.5 times faster than the observations indicate.
This is a significant difference that has not been explained and indicates the theory of greenhouse impact on atmospheric temperature is not sufficiently known to even reproduce what has already happened. We are not talking about 10 or 15 years here, but 37 years – well over a third of a century. That two very independent types of measuring systems (balloons and satellites), constructed by a variety of institutions (government, university, private), all show a much slower rate of warming gives high confidence in the result.
Thus, the evidence here strongly suggests the theory, as embodied in models, goes much too far in forcing the atmosphere to retain heat when in reality the atmosphere has a means to relinquish that heat and thus warms at a much slower rate.
I’ve shown here that for the global bulk atmosphere, the models over-warm the atmosphere by a factor of about 2.5. As a further note, if one focuses on the tropics, the models show an even stronger greenhouse warming in this layer.
However, a similar calculation with observations as shown in Fig. 3 indicates the models over-warm the tropical atmosphere by a factor of approximately 3 (Models +0.265, Satellites +0.095, Balloons +0.073 °C/decade), again indicating the current theory is at odds with the facts. (Again, see section 2.)
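As a quick arithmetic check, the factor of approximately 3 follows directly from the trend values quoted above (averaging the two observational systems):

```python
# Tropical trends quoted in the text (°C per decade)
model_trend = 0.265
satellite_trend = 0.095
balloon_trend = 0.073

# Average the two independent observational systems
obs_mean = (satellite_trend + balloon_trend) / 2  # 0.084 °C/decade
ratio = model_trend / obs_mean
print(round(ratio, 1))  # ≈ 3.2, i.e. roughly a factor of 3
```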
It is a bold strategy in my view to actively promote the output of theoretical climate models while attacking the multiple lines of evidence from observations.
Note that none of the observational datasets are perfect and continued scrutiny is healthy, but when multiple, independent groups generate the datasets and when the results for two completely independent systems (balloons and satellites) agree closely with each other and disagree with the model output, one is left scratching one’s head at the decision to launch an offensive against the data. This doesn’t make scientific sense to me.
(1.2) Surface temperature issues
There are several issues regarding surface temperature datasets that are too involved to discuss in this material. I shall focus on a few points with which I am familiar and on which I have published.
(1.2.a) Surface temperature as a metric for detecting the influence of the increasing concentrations of greenhouse gases
One of my many climate interests is the way surface temperatures are measured and how surface temperatures, especially over land, are affected by their surroundings. In several papers (Christy et al. 2006 J. Climate, Christy et al. 2009 J. Climate, Christy 2013 J. Appl. Meteor. Clim., Christy et al. 2016 J. Appl. Meteor. Clim.), I closely examined individual stations in different regions and came to the conclusion that the relatively small signal we seek from human-induced climate change is easily confounded by the growth of infrastructure around the thermometer stations, by the variety of changes these stations undergo through time, and by the natural ups and downs of climate.
It is difficult to adjust for these contaminating factors to extract a pure dataset for greenhouse detection because often the non-climatic influence comes along very gradually just as is expected of the response to the enhanced greenhouse effect.
In examining ocean temperatures (Christy et al. 2001, Geophys. Res. Lett.) I discovered that the trends of the water temperature (1m depth) do not track well with those of the air temperature just above the water (3m), even if both are measured on the same buoy over 20 years. This is important for the discussion below where NOAA used marine air temperatures to adjust water temperature measurements from ships.
There are many other factors that render surface temperature datasets of low effectiveness for the detection of enhanced greenhouse warming:
- lack of systematic geographical coverage in time,
- unsystematic measuring methods and instrumentation in time and space,
- the point measurement represents at best a tiny, local area, and
- the measurement is easily impacted by slight changes in the surroundings, which can occur, for example, when a station moves.
There have been huge efforts to try to adjust the raw surface data to give a time series that would represent that of a pristine environment, and I have led or been a part of some of these (e.g. for Central California in Christy et al. 2006 and East Africa in Christy et al. 2009 and Christy 2013).
Thus, having experience in building surface, satellite and balloon temperature datasets, and taking into account the signal we are looking for to detect the enhanced greenhouse effect, the evidence suggests to me that utilizing the bulk atmospheric measurements provides the best opportunity to answer questions about the climate’s response to this human-induced change in atmospheric composition.
The deep atmosphere is much more coherent in space and time in terms of its variations. It is not affected by human development at the surface. It is measured systematically. To be sure, satellite and balloon temperatures require their own adjustments and cannot be considered “perfect”, but do offer an independence from one another to allow direct comparison studies.
Regarding the detection of the enhanced greenhouse effect, the troposphere, as indicated by models, happens to be the atmospheric region that will respond the most, i.e. warm the fastest, and thus, in my view, is a metric that provides a better way to detect human influence on the climate.
(1.2.b) The new NOAA surface temperature dataset
A series of papers appeared last year (including Huang et al. 2015 J. Climate, Karl et al. 2015 Science) describing a new surface temperature dataset constructed by NOAA which indicated a bit more warming in the past 10 to 25 years than the previous versions. The key change dealt with seawater temperatures in the dataset now known as ERSSTv4. This change introduced an additional warming into the record from about 1990 onward. The main reason for this new warming, as the authors note, was the adjustment applied to buoy data, adding about +0.12 °C to the buoy readings.
In 1980, only about 10 percent of the data reports were from buoys, but by 2000 about 90 percent were buoy data. Thus, because the influence of the buoy data grew significantly through time, the simple addition of a bias to all the buoys from the beginning created a warmer trend as they became the dominant source of information.
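The mechanism is easy to illustrate: if a constant offset is added to an observing system whose share of the reports grows over time, the blended record acquires a trend even when the underlying temperature is flat. The sketch below is a deliberately minimal toy model; the linear ramp of the buoy fraction is an assumption for illustration, the +0.12 °C offset is the value quoted above, and NOAA's actual blending procedure is far more involved:

```python
import numpy as np

years = np.arange(1980, 2001)
# Assumed fraction of reports coming from buoys, ramping from ~10% to ~90%
buoy_fraction = np.linspace(0.10, 0.90, years.size)
buoy_offset = 0.12  # °C added to every buoy reading (per the text)

# Suppose the true sea-surface temperature anomaly is flat (zero trend).
true_anomaly = np.zeros(years.size)
# The blended record is a weighted mix of (unadjusted) ship reports and
# (offset) buoy reports, so the offset leaks in as the fraction grows.
blended = (1 - buoy_fraction) * true_anomaly \
          + buoy_fraction * (true_anomaly + buoy_offset)

trend_per_decade = np.polyfit(years, blended, 1)[0] * 10
print(round(trend_per_decade, 3))  # → 0.048 (°C/decade of spurious warming)
```

With these assumed numbers, a record whose true trend is exactly zero picks up roughly +0.05 °C/decade purely from the changing mix of instruments.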
Some background is necessary. Unlike satellite and balloon datasets which measure a systematic quantity (essentially atmospheric air temperature), surface temperature datasets are a mixture of air (over land) and water (over ocean) temperatures measured over a considerable range of instruments, exposures and methods. Over land, weather stations measure the temperature of the air in varying types of instrument shelters and by varying techniques at a level about 5 ft above the ground. Over the ocean, however, the temperature utilized is that of the water itself, not the air above, so traditional global surface datasets do not measure a homogeneous physical parameter over land versus ocean.
Further, the depth of the water temperature measurement varies considerably, from 2 ft to 50 ft or so, by methods that range from buckets drawn up on deck into which a thermometer is inserted, to engine-intake temperatures much deeper in the water, to buoys, drifting or moored to the bottom. So the fact that temperature varies by depth is an issue to tackle before a systematic dataset can even be attempted. Then too, the measurements are not spatially or temporally consistent, with large regions, such as Africa and the southern oceans, going unmeasured.
Keep in mind that even though the trend of this NOAA dataset became more positive in the past 10 to 20 years, it is still below climate model projections over the longer term. For longer periods, such as the period since 1979 when satellites began measuring bulk atmospheric temperatures, the new global dataset is similar to that of the Hadley Centre (1979-2015: NOAA +0.155 °C/decade, Hadley Centre UKMet, +0.165 °C/decade). However, there are questions that remain concerning the new NOAA seawater dataset, especially how it indicates more warming in the last 20 years than others.
Figure 4 displays the ocean trends for the region 20S to 60N (i.e. tropical and northern hemisphere oceans – there was too little data south of 20S for generating near-surface air temperatures there). Four datasets are represented: NOAA (red), the Hadley Centre (HadCRUT4, orange), a preliminary near-surface air temperature over the oceans built by my graduate student Rob Junod (yellow), and the UAH deep-layer air temperature from satellites (blue). Both NOAA and HadCRUT4 are temperatures of the seawater near the surface, so the two should be essentially the same.
NOAA used a curious reference variable to calibrate the water temperatures measured from ship intakes – the Night Marine Air Temperature (NMAT). This is curious because there are considerable adjustments required for the NMATs themselves, i.e. corrections for height of ship deck, etc. In any case, from this, the buoy data were then adjusted to match the ship data. It appears, then, that the foundational adjustment process depends on NMATs to adjust the ship data to then adjust the buoy data.
The final product from NOAA mixes all of these together, and because the geographic representation of the different systems changed dramatically (as noted, from approximately 10% buoys and 90% ships in 1980 to 90% buoys and 10% ships today – Huang et al. 2015), an adjustment applied to the buoys will automatically influence the trend.
I’m aware that the Committee sought information about this curious process and asked NOAA to generate datasets based only on consistent measuring systems, i.e. ships alone, buoys alone and NMATs alone, to see if one system might have impacted the trends improperly due to distribution changes. NOAA was unable to accommodate this request.
At the same time I asked my graduate student, Rob Junod, to do the work for NMAT. What is presented here is preliminary, but follows much of the previous work on NMATs (developed at the National Oceanographic Centre and the Hadley Centre in the UK) with the added advantage of being updated to 2014.
The best geographical data coverage was found to be 20°S to 60°N, so this area was also applied to the other datasets for an apples-to-apples comparison. The results are shown in Fig. 4, in which all trends end in 2014 but cover periods, in two-year increments, from 20 years down to 10 years.
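The trailing-window trends used in Fig. 4 can be computed with a simple routine like the following (shown here on a synthetic series with an arbitrary trend, not the actual datasets):

```python
import numpy as np

def trailing_trends(years, series, end_year=2014, lengths=range(10, 21, 2)):
    """Least-squares trends (°C/decade) for windows ending in `end_year`
    with lengths of 10, 12, ..., 20 years, as in Fig. 4."""
    out = {}
    for n in lengths:
        mask = (years > end_year - n) & (years <= end_year)
        out[n] = np.polyfit(years[mask], series[mask], 1)[0] * 10
    return out

# Synthetic illustration: a purely linear series at 0.015 °C/yr
years = np.arange(1979, 2015)
series = 0.015 * (years - 1979)
trends = trailing_trends(years, series)
# Every window recovers the same 0.15 °C/decade for this noiseless series;
# on real data the windows differ, which is what Fig. 4 compares.
```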
A number of observations are evident in Fig. 4.
- In terms of the temperature trend, the air temperatures warm more slowly than the water temperatures (as indicated in my 2001 study mentioned above).
- NOAA warms the fastest in all periods.
- In the past 10-14 years, the trends of HadCRUT4 agree better with the near-surface air temperature dataset (being near zero and supporting the notion of a hiatus) than with the trends of the physically identical quantity from NOAA.
- The magnitude of the NMAT trends lies between the trends of the deep atmosphere and the seawater.
This figure generates a number of data quality questions too.
- If NMATs were used to calibrate the ship temperatures and then the ships were used to calibrate the buoy temperatures, why does the NOAA dataset differ so much from its basic reference point – NMATs?
- What do the time series look like and what are the sub-period trends for seawater under the condition that only ships and/or only buoys are used to build the dataset for the past 20-25 years?
- What does the time series of NOAA’s NMAT (i.e. their reference) dataset show?
The real science questions here are those which have significant importance to the understanding of how extra greenhouse gases might affect the climate as shown in the following section.
(2) How well do we understand climate change?
A critical scientific goal in our era is to determine whether emissions from human activities impact the climate and if so by how much. This is made especially difficult because we know the climate system already is subject to significant changes without the influence of humans.
Because there is no measuring device that explicitly determines the cause of the climate changes we can measure, such as temperature, our science must take a different approach to seek understanding as to what causes the changes, i.e. how much is natural and how much is human induced. The basic approach today utilizes climate models. (The projections of these models are being utilized for carbon policies as well.)
It is important to understand that output from these models (i.e., projections of the future climate and the specific link that increasing CO2 might have on the climate) is properly defined as scientific hypotheses or claims – model output cannot be considered as providing proof of the links between climate variations and greenhouse gases.
These models are complex computer programs which attempt to describe, through mathematical equations, as many of the factors that affect the climate as possible and thus to estimate how the climate might change in the future. The model, it is hoped, will provide accurate responses of the climate variables, like temperature, when extra greenhouse gases are included in the model. However, the equations for nearly all of the important climate processes are not exact, representing the best approximations modelers can devise and that computers can handle at this point.
A fundamental aspect of the scientific method is that if we say we understand a system (such as the climate system) then we should be able to predict its behavior.
If we are unable to make accurate predictions, then at least some of the factors in the system are not well defined or perhaps even missing. [Note, however, that merely replicating the behavior of the system (i.e. reproducing “what” the climate does) does not guarantee that the fundamental physics are well-known. In other words, it is possible to obtain the right answer for the wrong reasons, i.e. getting the “what” of climate right but missing the “why”.]
Do we understand how greenhouse gases affect the climate, i.e. the link between emissions and climate effects?
As noted above, a very basic metric for climate studies is the temperature of the bulk atmospheric layer known as the troposphere, roughly from the surface to 50,000 ft altitude. This is the layer that, according to models, should warm significantly as CO2 increases – even faster than the surface. Unlike the surface temperature, this bulk temperature informs us about the crux of the global warming question – how much heat is accumulating in the global atmosphere? And this CO2-caused warming should be easily detectable by now, according to models.
This provides a good test of how well we understand the climate system because since 1979 we have had two independent means of monitoring this layer – satellites from above and balloons with thermometers released from the surface.
I was able to access 102 CMIP-5 rcp4.5 (representative concentration pathways) climate model simulations of the atmospheric temperatures for the tropospheric layer and generate bulk temperatures from the models for an apples-to-apples comparison with the observations from satellites and balloons. These models were developed in institutions throughout the world and used in the IPCC AR5 Scientific Assessment (2013).
The information in this figure provides clear evidence that the models have a strong tendency to over-warm the atmosphere relative to actual observations. On average the models warm the global atmosphere at a rate 2.5 times that of the real world.
This is not a short-term, specially-selected episode, but represents the past 37 years, over a third of a century. This is also the period with the highest concentration of greenhouse gases and thus the period in which the response should be of largest magnitude.
Following the scientific method of testing claims against data, we would conclude that the models do not accurately represent at least some of the important processes that impact the climate because they were unable to “predict” what has already occurred.
In other words, these models failed at the simple test of telling us “what” has already happened, and thus would not be in a position to give us a confident answer to “what” may happen in the future and “why.” As such, they would be of highly questionable value in determining policy that should depend on a very confident understanding of how the climate system works.
There is a related climate metric that also utilizes atmospheric temperature which in models has an even larger response than that of the global average shown above. This metric, then, provides a stronger test for understanding how well models perform regarding greenhouse gases specifically. In the models, the tropical atmosphere warms significantly in response to the added greenhouse gases – more so than that of the global average atmospheric temperature.
In the tropical comparison here, the disparity between models and observations is even greater, with models on average warming this atmospheric region about three times faster than observed.
Such a result reinforces the implication above that the models have much improvement to undergo before we may have confidence they will provide information about what the climate may do in the future or even why the climate varies as it does. For the issue at hand, estimates of how the global temperature might be affected by emission reductions from regulations would be exaggerated and unreliable.
(3) Climate Impact of Regulations (i.e. Paris) Will Not Be Attributable or Detectable
No one knows the climate impact of the proposed carbon emission reductions agreed to in Paris. The main reason for this is there is considerable latitude for countries to do as little or as much as they desire. Examining the history of global carbon emissions, it is clear that countries, especially developing countries, will continue to seek to expand energy use through carbon combustion because of their affordability in providing considerable positive benefits to their citizens.
In any case, impact on global temperature for current and proposed reductions in greenhouse gases will be tiny at best.
To demonstrate this, let us assume, for example, that the total emissions from the United States were reduced to zero as of May 13, 2015 (the date of a hearing at which I testified). In other words, as of that day and going forward, there would be no industry, no cars, no utilities, no people – i.e. the United States would cease to exist as of that day. Regulations, of course, will only reduce emissions a small amount, but to make the point of how minuscule the regulatory impact will be, we shall simply go far beyond reality and cause the United States to vanish. With this we shall attempt to answer the question of the climate change impact due to emission reductions.
Using the U.N. IPCC impact tool known as the Model for the Assessment of Greenhouse-gas Induced Climate Change, or MAGICC, graduate student Rob Junod and I reduced the projected growth in total global emissions by the U.S. emission contribution starting on that date and continuing onward. We also used the value of the equilibrium climate sensitivity, 1.8 °C, as determined from empirical techniques.
After 50 years, the impact as determined by these model calculations would be only 0.05 to 0.08 °C – an amount less than that which the global temperature fluctuates from month to month. [These calculations used emission scenarios A1B-AIM and AIF-MI with U.S. emissions comprising 14 percent to 17 percent of the 2015 global emissions. There is evidence that the climate sensitivity is less than 1.8 °C, which would further lower these projections.]
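For a rough sense of where a number of that size comes from, a back-of-envelope cumulative-carbon calculation lands in the same range. This is emphatically NOT the MAGICC computation; the emission rate and transient response value below are assumed round numbers for illustration only:

```python
# Back-of-envelope sketch, not the MAGICC calculation.
# Assumed values (illustrative round numbers, not from the testimony):
us_emissions_gtc_per_year = 1.5   # roughly 5.5 Gt CO2/yr expressed as carbon
years_eliminated = 50
tcre_deg_per_1000gtc = 1.0        # low-end transient climate response to
                                  # cumulative emissions, broadly consistent
                                  # with a sensitivity near 1.8 °C

avoided_carbon = us_emissions_gtc_per_year * years_eliminated  # 75 GtC
delta_t = tcre_deg_per_1000gtc * avoided_carbon / 1000.0
print(round(delta_t, 3))  # ≈ 0.075 °C of warming avoided after 50 years
```

Even under these generous assumptions, the avoided warming falls in the same few-hundredths-of-a-degree range as the MAGICC result quoted above.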
As noted, the impact on global emissions and global climate of the recent agreements in Paris is not exactly quantifiable. How each country will behave regarding its emissions is essentially impossible to predict, in addition to the issue of not knowing how energy systems themselves will evolve over time.
Because halting the emissions of our entire country would have such a tiny calculated impact on global climate, it is obvious that fractional reductions in emissions through regulation would produce imperceptible results.
In other words, there would be no evidence in the future to demonstrate that a particular climate impact was induced by the proposed and enacted regulations. Thus, the regulations will have no meaningful or useful consequence on the physical climate system – even if one believes climate models are useful tools for prediction.
Climate change is a wide-ranging topic with many difficulties. Our basic knowledge about what the climate is doing (i.e. measurements) is plagued by uncertainties.
In my testimony today I have given evidence that the bulk atmospheric temperature is measured well enough to demonstrate that our understanding of how greenhouse gases affect the climate is significantly inadequate to explain the climate since 1979.
In particular, the actual change of the fundamental metric of the greenhouse warming signature – the bulk atmospheric temperature where models indicate the most direct evidence for greenhouse warming should lie – is significantly misrepresented by the models. Though no dataset is perfect, the way in which surface datasets have been constructed leaves many unanswered questions, especially for the recent NOAA update which shows more warming than the others.
Finally, regulations already enforced or being proposed, such as those from the Paris Agreement, will have virtually no impact on whatever the climate is going to do.
This appendix is an extract from my written testimony presented at the following Hearing:
U.S. Senate Committee on Commerce, Science, & Transportation
Subcommittee on Space, Science and Competitiveness 8 Dec 2015
Testimony of John R. Christy University of Alabama in Huntsville.
Alleged impacts of human-induced climate changes regarding extreme events
Much of the alarm related to increasing greenhouse gas concentrations shifted in the past decade from global temperature changes to changes in extreme events, i.e. those events which typically have a negative impact on the economy. These events may be heat waves, floods, hurricanes, etc.
In terms of heat waves, below is the number of 100 °F days observed in the U.S. from a controlled set of weather stations. It is not only clear that hot days have not increased, but it is interesting that in the most recent years there has been a relative dearth of them.
Forest and wild fires are documented for the US. The evidence below indicates there has not been any change in frequency of wildfires. Acreage (not shown) shows little change as well.
The two figures above demonstrate that fire events have not increased in frequency in the United States during the past several decades.
The claims that droughts and floods are increasing may be examined by the observational record as well.
The two figures above demonstrate that moisture conditions have not shown a tendency to have decreased (more drought) or increased (more large-scale wetness). Such information is rarely consulted when it is more convenient simply to make unsubstantiated claims that moisture extremes, i.e. droughts and floods (which have always occurred), are somehow becoming even more extreme.
Over shorter periods and in certain locations, there is evidence that the heaviest precipitation events are tending to be greater. This is not a universal phenomenon and it has not been established that such changes may be due to changes in greenhouse gas concentrations as demonstrated earlier because the model projections are unable to reproduce the simplest of metrics.
It is a simple matter to find documentation of the ever-rising production of grains. One wonders about the Federal Council on Environmental Quality’s allegation that there has been “harm to agriculture” from human-induced climate change because when viewing the total growth in production, which appears to be accelerating, one would assume no “harm” has been done during a period of rising greenhouse gases.
With the evidence in these examples above, it is obviously difficult to establish the claims about worsening conditions due to human-caused climate change, or more generally that any change could be directly linked to increasing CO2.
This point also relates to the issue of climate model capability noted earlier. It is clear that climate models fall short on some very basic issues of climate variability, being unable to reproduce “what” has happened regarding global temperature, and therefore not knowing “why” any of it happened. It is therefore premature to claim that one knows the causes for changes in various exotic measures of weather, such as rainfall intensity over short periods, which are not even explicitly generated in climate model output.
The Disappointing Scientific Process
I have written much for previous congressional hearings and other venues about the failure of the scientific community to objectively approach the study of climate and climate change. (See Appendix) Climate science is a murky science with large uncertainties on many critical components such as cloud distributions and surface heat exchanges. As mentioned above, there is no objective instrumentation that can tell us “why” changes occur. That being the case, we are left with hypotheses (claims) to put forward and then to test.
The information given above, in my view, is clear evidence that the current theoretical understanding of “why” the climate changes, as embodied in models (and on which current policy is based), fails such tests. Indeed, the theoretical (model) view as expressed in the IPCC AR5 in every case overestimated the bulk tropical atmospheric temperature response to extra greenhouse gases (see above and IPCC Supplementary Material Figure 10.SM.1), indicating the theoretical understanding of the climate response is too sensitive to greenhouse gases.
One problem with our science relates to the funding process for climate studies, the vast majority of which is provided through federal agencies. Funding decisions are made by people, and people have biases.
Our science has also seen the move toward “consensus” science, where “agreement” between people and groups is elevated above determined, objective investigation. The sad progression of events here has even led to congressional investigations designed to silence (with some success) those whose voices, including my own, have challenged the politically-correct views on climate (e.g., the congressional investigation by Rep. Grijalva, 22 Feb 2015).
Today, funding decisions are made by review panels. In this process, many proposals for funding are submitted to the agencies, but the agencies have funds to support only a fraction of them, so only a few proposals can be funded, and these are selected by panels.
In the area of climate, it is clear the agencies are convinced of the consensus view of dangerous climate change as indicated by their various statements and press releases on the issue. Therefore, when a contrarian proposal is submitted that seeks to discover other possible explanations besides greenhouse gases for the small changes we now see, or one that seeks to rigorously and objectively investigate climate model output, there is virtually no chance for funding.
This occurs because the panel determines by majority vote whom to fund, and with tight competition, any bias by just a couple of panel members against a contrarian proposal is sufficient for rejection. Of course, the agencies will claim all is done in complete objectivity, but that would be precisely the expected response of someone already within the “consensus” and whose agency has stated its position on climate change.
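The arithmetic behind this dynamic is easy to see. In the toy sketch below, the panel size, the 1–5 scoring scale, and the 4.0 funding cutoff are all assumptions chosen for illustration, not a description of any agency's actual procedure; it simply shows how two dissenting scores can sink an otherwise highly rated proposal when competition is tight.

```python
# Toy illustration: an assumed 10-member panel scoring on a 1-5 scale,
# with a tight cutoff of 4.0 (plausible when few proposals can be funded).
def panel_average(scores):
    return sum(scores) / len(scores)

unbiased     = [5, 5, 4, 5, 4, 5, 4, 5, 5, 4]  # average 4.6 -> above cutoff
with_two_low = [5, 5, 4, 5, 4, 5, 4, 5, 1, 1]  # same panel, two low scores

print(panel_average(unbiased))      # 4.6
print(panel_average(with_two_low))  # 3.9 -> below a 4.0 cutoff
```

Eight reviewers rating the proposal 4 or 5 cannot offset two scores of 1: the average drops from 4.6 to 3.9, below the hypothetical cutoff.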
This brings me to “consensus science.”
The term “consensus science” will often be appealed to regarding arguments about climate change to bolster an assertion. This is a form of “argument from authority.” Consensus, however, is a political notion, not a scientific notion.
As I testified to the Inter-Academy Council in June 2010, wrote in Nature that same year (Christy 2010), and documented in my written testimony for several congressional hearings (e.g., House Space, Science and Technology, 31 Mar 2011), the IPCC and other similar Assessments do not represent for me a consensus of much more than the consensus of those selected to agree with a particular consensus.
The content of these climate reports is actually under the control of a relatively small number of individuals – I often refer to them as the “climate establishment” – who through the years, in my opinion, came to act as gatekeepers of scientific opinion and information, rather than brokers. The voices of those of us who object to various statements and emphases in these assessments are by and large dismissed rather than accommodated. This establishment includes the same individuals who become the “experts” called on to promote IPCC claims in government reports such as the endangerment finding by the Environmental Protection Agency.
As outlined in my previous testimonies, these “experts” become the authors and evaluators of their own research relative to research which challenges their work. This becomes an obvious conflict of interest. But with the luxury of having the “last word” as “expert” authors of the reports, alternative views vanish.
This is not a process that provides the best information to the peoples’ representatives. The U.S. Congress must have the full range of views on issues such as climate change, which (a) are characterized by considerable ambiguity (see model results), (b) are used to promote regulatory actions that will be economically detrimental to the American people and, most ironically, (c) will have no impact on whatever the climate will do.
I’ve often stated that climate science is a “murky” science. We do not have laboratory methods of testing our hypotheses as many other sciences do. As a result, what passes for science includes opinion, arguments from authority, dramatic press releases, and fuzzy notions of consensus generated by preselected groups. This is not science.
We know from Climategate emails and many other sources that the IPCC has had problems with those who take different positions on climate change than what the IPCC promotes. There is, however, another way to deal with this.
Since the IPCC activity and climate research in general are funded by U.S. taxpayers, I propose that five to ten percent of the funds be allocated to a group of well-credentialed scientists to produce an assessment that expresses legitimate, alternative hypotheses that have been (in their view) marginalized, misrepresented or ignored in previous IPCC reports (and thus the EPA Endangerment Finding and National Climate Assessments).
Such activities are often called “Red Team” reports and are widely used in government and industry. Decisions regarding funding for “Red Teams” should not be placed in the hands of the current “establishment” but in panels populated by credentialed scientists who have experience in examining these issues.
Some efforts along this line have arisen from the private sector (e.g., the Non-governmental International Panel on Climate Change at http://nipccreport.org/ and Michaels (2012), ADDENDUM: Global Climate Change Impacts in the United States).
I believe policymakers, with the public’s purse, should actively support the assembling of all information that is vital to addressing this murky and wicked science, since the public will ultimately pay the cost of any legislation alleged to deal with climate.
Topics to be addressed in this “Red Team” assessment, for example, would include:
- evidence for a low climate sensitivity to increasing greenhouse gases,
- the role and importance of natural, unforced variability,
- a rigorous and independent evaluation of climate model output,
- a thorough discussion of uncertainty,
- a focus on metrics that most directly relate to the rate of accumulation of heat in the climate system,
- analysis of the many consequences, including benefits, that result from CO2 increases, and
- the importance that affordable and accessible energy has to human health and welfare.
What this proposal seeks is to provide to the Congress and other policymakers a parallel, scientifically-based assessment regarding the state of climate science which addresses issues that have heretofore been un- or under-represented by previous taxpayer-funded, government-directed climate reports. In other words, our policymakers need to see the entire range of findings regarding climate change.
Summary of Extract
The messages of the two points outlined in the extract above are:
- the claims about increases in frequency and intensity of extreme events are generally not supported by actual observations, and
- official information about climate science is largely controlled by agencies through
  - funding choices for research and
  - the carefully selected (i.e. biased) authorship of reports such as the EPA Endangerment Finding and the National Climate Assessment.