In my last post I speculated on why belief in global warming was so closely associated with a left-wing political stance and suggested it was because global warming gave people who were anti-capitalist the “smoking gun” they needed to attack large corporations and conservative governments. Whether that is true or not, the question you might legitimately ask is why should that matter? Just because a scientific theory provides ammunition for a political belief does not mean it is invalid, does it?
Well, the problem is, it does, or it can.
If a
scientific theory adds support to a belief or attitude, or offers some sort
of advantage to the researcher, there is a significant risk of confirmation bias.
The idea of
confirmation bias might seem odd because we imagine that all scientists are
keen to confirm their hypotheses but what many people don’t realise
is that when doing research you not only have to try and validate your
hypothesis, you have to do your best to invalidate
it. This is called scientific rigour. Confirmation bias can occur in many ways
in science: it can lead researchers to stop testing once they have got the
result they seek; it can lead them to select a preferred result out of
ambiguous data; it can cause them to “round off” calculations or eliminate
things such as ‘outliers’ in data. It is also, I am afraid, embedded in the “peer review” system. Confirmation bias on a secondary level is evident in
studies such as the one that found “97% of climate scientists accept the global
warming theory” and in the use of "h-index" scores (a measure of how often a researcher's papers are cited) to try and discredit Bjorn Lomborg.
In most
scientific research, confirmation bias arises from the ambitions
of the researcher. A notorious case was that of Dr William McBride, the first person to link Thalidomide to birth defects, who, 20 years later,
tried to show that the morning-sickness drug Debendox also produced
defects. It was later revealed that he had falsified his results to reach his
conclusion. This was a case of one person wishing to make a second valuable discovery.
The desire to prove that global warming is “real” is held by many millions of
people, which raises particular concerns for the whole field of climate research.
Because there are so many people who
want, a priori, the theory of global
warming to be true, research in this area must be subject to even more
stringent methodological rigour than usual.
Unfortunately,
research into global warming is not more rigorous than other research. This is
not due to any failing by researchers but because of inherent problems in the
area itself. Those problems include the following:
Lack of a Control
If we test a
drug for a particular illness, we cannot simply give the drug to a group of people
who have the illness and see how many get better. We have to have a control group which does not take the
drug and see how many of them get better without taking the medication.
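To make the point concrete, here is a minimal sketch in Python, with entirely made-up recovery rates: without the second line of output, the first looks like evidence for a drug that, in this sketch, does nothing at all.

```python
import random

random.seed(1)

# Hypothetical figures: half of all patients recover on their own,
# and (in this sketch) the drug adds nothing at all.
baseline_recovery = 0.5
drug_effect = 0.0

def trial(n, p_recover):
    """Return how many of n patients recover, each with probability p_recover."""
    return sum(random.random() < p_recover for _ in range(n))

n = 20
treated   = trial(n, baseline_recovery + drug_effect)
untreated = trial(n, baseline_recovery)   # the control group

print(f"Recovered with the drug:    {treated}/{n}")
print(f"Recovered without the drug: {untreated}/{n}")
# Seen on its own, the treated figure looks like success; only the
# control line reveals that people recover at much the same rate anyway.
```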
The first
problem with testing the effect of greenhouse gas emissions on global
temperatures is that we do not have a
control. We do not have an identical Earth without the emissions to compare
with this one. That means we are unable to isolate greenhouse gases as a factor
in warming. The lack of a control is not simply an inconvenient hurdle to be
overcome in climate research; it is a
major obstacle. Without a “control Earth”, researchers have to try to
extrapolate climate data from the recent past and generate projections of what
temperatures might have been if not
for rising CO2 levels. But this does not constitute having a real control, because these “what if” projections are
themselves hypothetical and also require validation. This
process is compounded by the next problem:
Incomplete and Uncalibrated Data
One of the
major problems climatologists face is that we have only had accurate, comprehensive climate data for less than a century. When the press announces
things like “Hottest March on record”, it means since records began,
i.e. about a hundred years ago. To make matters worse, the climate data that
have been recorded over the last few centuries were gathered using a variety of
methods of varying accuracy. In Australia, for example, temperature and rainfall
readings in the outback were for many years reported by people with a thermometer
and a rain gauge on a post outside the house. Vast areas of the planet have had
no proper means of recording data until satellites started to take measurements
50 years ago.
Given this
paucity of uniform, calibrated data, climate scientists have to rely on proxies for
historical data, that is to say, things like
lacustrine and marine depositions and tree growth rings. This research in turn
depends on other fields of research that seek to correlate things such as temperature and rainfall or temperature and CO2 levels with plant growth. (That process is
complicated in itself because higher temperatures and higher CO2 levels can
independently affect growth.)
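As a purely illustrative sketch (invented numbers, not real proxy data), this is roughly what a proxy calibration involves: fit a relationship between a proxy and instrumental temperatures over the period where both exist, then use it to “reconstruct” temperatures where only the proxy exists. Every reconstruction inherits the scatter of that fit.

```python
import random
random.seed(2)

# Invented calibration data: 30 years with both thermometer readings
# and a proxy measurement (say, tree-ring width).
temps = [14 + random.uniform(-1.5, 1.5) for _ in range(30)]
rings = [0.8 + 0.12 * t + random.gauss(0, 0.05) for t in temps]

# Ordinary least squares, regressing temperature on ring width.
n = len(temps)
mean_r, mean_t = sum(rings) / n, sum(temps) / n
slope = (sum((r - mean_r) * (t - mean_t) for r, t in zip(rings, temps))
         / sum((r - mean_r) ** 2 for r in rings))
intercept = mean_t - slope * mean_r

# "Reconstruct" a pre-instrumental temperature from a lone ring width.
old_ring = 2.45
print(f"Reconstructed temperature: {intercept + slope * old_ring:.2f} C")
# The answer carries the noise of the calibration with it, plus the
# assumption that the relationship held unchanged in the distant past.
```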
While
patterns of climate and atmospheric CO2 levels can be painstakingly derived
from proxies, those data are compromised by:
Error Margins
Calculations
of historical and even present global temperatures, gas emissions, ice-cap
thicknesses etc. are not precise. All such measurements entail ranges: e.g. the average summer
temperature in Britain in 1850 might be calculated as 18–20 °C
and CO2 levels as 240–260 ppm. (These are simply examples, not
real figures.) Now, many people might think
that when you start to plot graphs and derive correlations, error margins
somehow cancel themselves out. That is not necessarily true. Self-cancellation
assumes that if we overestimate one figure in our data we are likely to
underestimate some other figure thereby correcting the problem. But this is not
always the case. In some systems, error margins can compound each other. This
becomes a serious issue where there is a risk of confirmation bias because, by selecting, say, the highest values in
each range we can produce a very severe “worst case scenario” which is quite
unrealistic compared to taking the median values in the ranges.
Consider the following:
The average temperature of a region in 1960 is calculated as 21° (plus or minus 2°). In 1970 the temperature in the same region is calculated as 23° (plus or minus 2°). The researcher concludes that temperatures have risen 2 degrees. However, if the actual temperature in 1960 was at the top of the range, at 23°, and the actual temperature in 1970 was at the bottom of the range, at 21°, then temperatures have actually fallen 2 degrees.
This is a simple example, but the problem of error margins applies equally to large-scale, long-term calculations of global temperatures. The manipulation of ranges is apparent in many of the earlier findings of the IPCC.
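A few lines of Python make the arithmetic explicit (using the same invented figures as the example above): depending on which ends of the two ranges are picked, the identical measurements can be made to show anything from a six-degree rise to a two-degree fall.

```python
# (low, central, high) estimates, as in the example above
t_1960 = (19.0, 21.0, 23.0)   # 21 plus or minus 2
t_1970 = (21.0, 23.0, 25.0)   # 23 plus or minus 2

central_change = t_1970[1] - t_1960[1]   # compare central estimates
extreme_rise   = t_1970[2] - t_1960[0]   # high 1970 against low 1960
extreme_fall   = t_1970[0] - t_1960[2]   # low 1970 against high 1960

print(f"Central estimate of change: {central_change:+.1f} degrees")
print(f"Picking extremes one way:   {extreme_rise:+.1f} degrees")
print(f"Picking extremes the other: {extreme_fall:+.1f} degrees")
```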
The problem
of error margins – which is related to issues of randomness – leads onto:
Significance Issues
In any
testing process, there is always the possibility of outcomes that are the result
of chance. In the drug-testing example above,
if we gave the drug to a sample of 20 people and 12 of them
got better, compared to 10 people in the control group, we would scarcely
conclude that our drug was a success. We would want to test on a much larger
group, and do so many times over before we concluded that our drug produced a
20% better rate of cure.
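A rough Monte Carlo sketch (pure Python, invented scenario) shows why: if we assume the drug does nothing and both groups share the pooled cure rate, a gap of two or more recoveries out of twenty still turns up by chance roughly a third of the time.

```python
import random
random.seed(3)

n = 20
observed_gap = 12 - 10              # 12/20 treated vs 10/20 controls
pooled_rate = (12 + 10) / (2 * n)   # cure rate if the drug does nothing

# Simulate many trials of a useless drug and count how often chance alone
# produces a gap at least as large as the one we observed.
runs = 100_000
hits = 0
for _ in range(runs):
    treated = sum(random.random() < pooled_rate for _ in range(n))
    control = sum(random.random() < pooled_rate for _ in range(n))
    if treated - control >= observed_gap:
        hits += 1

print(f"Chance of a gap this large from a useless drug: {hits / runs:.2f}")
```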
The
significance of any result must always be compared to the normal random
fluctuations of parameters in the target population.
The "significance problem" in global warming theory is that, according to the IPCC, global temperatures over
the last 200 years have risen by only 1 (one) degree.
Given the
normal random variations of temperatures over millennia, it is very hard to
show that this is a statistically significant result. The problem is
compounded by the fact that a one-degree rise represents a proportional rise
of one three-hundredth (1/300) above pre-1800 temperatures. This tiny fraction is smaller than the error margins in the data that
have gone into calculating it.
(Note: when comparing temperatures proportionally and calculating contingent probabilities, remember you have to convert them
to the absolute Kelvin scale. A rise from 21 °C to 23 °C is a
rise from 294 K to 296 K. This puts a rather different complexion on the
ratio of increase.)
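The arithmetic is trivial but worth seeing side by side (using the illustrative 21° to 23° rise above):

```python
t_before, t_after = 21.0, 23.0           # degrees Celsius
rise = t_after - t_before

ratio_celsius = rise / t_before          # 2/21, about 9.5%
ratio_kelvin  = rise / (t_before + 273)  # 2/294, about 0.7%

print(f"Proportional rise on the Celsius scale: {ratio_celsius:.1%}")
print(f"Proportional rise on the Kelvin scale:  {ratio_kelvin:.1%}")
```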
Problems of Computer Modelling
When a
climate scientist makes a prediction such as “at present emission rates, global
temperatures will rise by 2 to 3 degrees by 2080”, they are putting forward a
hypothesis. If the world warms by that much in 2080, the hypothesis is
confirmed; if not, it is disproved. The point is that the hypothesis remains a
hypothesis until 2080 when the data is in: there is no way of confirming or
rejecting it before that date. This
of course is not acceptable to people who believe we are facing imminent
destruction, so they have convinced themselves that perhaps we do not have to
wait that long; that perhaps there is a way to prove the accuracy of the
hypothesis right now through calculation:
perhaps there can be a mathematical proof of global warming.
Alas this is
not true.
People have
come to think it is possible because scientists use computers to create
simulations of the Earth’s climate. They feed in all the data we have mentioned
above – satellite readings, terrestrial readings, proxies and so on – and this allows
them to look for patterns, discover underlying relationships and, most importantly,
generate future scenarios. By varying the values of different factors they can derive different possible outcomes. These models are very
important in trying to understand climatic processes. The problem is that some people
seem to treat them as a form of data. But
they are not.
Computer climate models are not data;
they are just more detailed hypotheses.
Regardless
of how sophisticated computer models are, they still need to be calibrated against actual events. In
other words, we won’t know whether the computer’s algorithms accurately reflect
what will happen in 2080 until 2080. There is no way of short cutting the
testing procedure.
Now, computer modellers know that their predictions are tentative, and so they do not
claim absolute validity. What they do instead, to give their calculations some credibility,
is attach probability values to
them. These take the form of “if the present rate of emissions continues
there is an 80% chance of a rise in temperatures of between 2 and 4
degrees, etc…” This is designed to make the predictions more believable, but it
doesn’t, because those probabilities are simply an outcome of the modelling
process itself. What they are saying is “we have run this
simulation in our computer 100 times with various values and in 80 of those times the result has been 2-4 degrees
warming.” This is not the same as saying “there is an 80% probability that our
computer simulation is right.”
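A toy ensemble (a deliberately fake stand-in “model” with invented input ranges) makes the distinction clear: the quoted percentage is just the fraction of runs that land in a band, and it moves as soon as the modeller's assumptions about the inputs move.

```python
import random
random.seed(4)

def toy_model(sensitivity, emissions_growth):
    # A stand-in for a climate model: warming is just the product of
    # two uncertain inputs. Real models are vastly more complex, but
    # the point about ensemble "probabilities" is the same.
    return sensitivity * emissions_growth

def fraction_in_band(sens_low, sens_high, runs=10_000):
    hits = 0
    for _ in range(runs):
        s = random.uniform(sens_low, sens_high)   # assumed input spread
        g = random.uniform(0.8, 1.2)
        if 2.0 <= toy_model(s, g) <= 4.0:
            hits += 1
    return hits / runs

print(f"Assuming sensitivity 2-4: {fraction_in_band(2.0, 4.0):.0%} of runs give 2-4 degrees")
print(f"Assuming sensitivity 1-5: {fraction_in_band(1.0, 5.0):.0%} of runs give 2-4 degrees")
```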
The only way
a viable probability can be attached to a hypothesis is by reference to previous events.
If the
climate scientists could point to multiple times in the past where the Earth
was in its present configuration, where human activity was similar, and greenhouse
gases were increasing at their present rates, and show that on 80% of those occasions
temperatures rose by the specified amount, then the probability would have
actuarial validity. But, of course, the Earth never has been in this situation
before. As climate activists love to point out, the present situation is
unprecedented and you cannot generate probabilities for a unique
situation.
However,
there is an even more significant and, unfortunately, fundamental
problem in relation to calculating climate change.
Computational Irreducibility
A few years
ago, following the development of Chaos Theory, Stephen Wolfram, the man who
wrote the world’s most widely used mathematical software, Mathematica, coined the phrase computational
irreducibility to describe a troubling limitation of mathematics. Wolfram astutely describes mathematics as a “race between humans and the
universe, to calculate events before they happen.” The problem is that there are situations where events can never be calculated before they actually
happen.
The
principle has serious implications for the testing of hypotheses such as global
warming.
It can be
illustrated as follows:
Imagine you fire
a shell out of a cannon. Even the simplest computer, the chip in a mobile
phone, is capable of calculating where that shell will land and when it will
land there before it gets there.
Now imagine
you throw a leaf into a babbling brook, a small stream that is splashing over
rocks and whirling around reeds as it makes its way down the hillside. No
computer that exists, or possibly ever will exist, can calculate where that leaf will be
20 seconds later.
Calculating
the trajectory and flight time of the shell can be achieved using a small set
of input conditions and some basic Newtonian physics. Plotting the course of the leaf
requires calculating the velocity, direction and atomic forces of trillions of water and air
molecules which are all interacting with each other as well as the leaf. In other words, to find the position of the leaf at the designated time, it is quicker to simply wait and see.
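A sketch of the contrast (Python, with a logistic map standing in for the brook, chosen only because it is the textbook example of a chaotic system): the shell has a closed-form answer, while the chaotic system has to be stepped through, one iteration at a time, to know where it ends up.

```python
import math

# The shell: a closed-form answer from a handful of inputs (no air resistance).
v0, angle, g = 300.0, math.radians(45.0), 9.81   # muzzle speed, elevation, gravity
flight_time = 2 * v0 * math.sin(angle) / g
distance = v0 * math.cos(angle) * flight_time
print(f"Shell lands {distance / 1000:.1f} km away after {flight_time:.1f} s")

# The leaf: no formula jumps to the answer. To know the state after
# 1000 steps you must compute all 1000 steps - that is what
# "computationally irreducible" means.
x = 0.3712                        # arbitrary starting condition
for _ in range(1000):             # no shortcut past this loop
    x = 3.99 * x * (1 - x)        # logistic map in its chaotic regime
print(f"State after 1000 steps: {x:.6f}")
```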
Calculating
the interaction between the factors that govern global temperatures - variations in solar
radiation, CO2 levels, water droplet levels, cloud formation, ocean
currents, air currents, albedo, ozone levels, biological feedback mechanisms,
agricultural developments, etc. - is very much the same problem as calculating all the
factors determining the path of the leaf in the stream, only on a far greater scale. Even if you could
calculate each of those effects individually, these factors all influence each other.
The upshot
of this is that there is no way to
accurately calculate global climatic events, such as rises in temperature, before those
events occur.
If you want to see whether the predictions of the climate scientists are
accurate or not, you just have to wait.
Now some
people will argue that you can generate accurate predictions by looking at
current circumstances, detecting a trend, and extending that trend into the
future, but when you look at the real world it is surprising how few systems that
applies to. Consider the stock market: if the market is rising steeply, can we
conclude that it is going to continue to rise for decades? No. Almost certainly
there is going to be a correction soon and the market will fall. Snow will keep piling
up on a ridge until it reaches a critical mass, and then an
avalanche will occur. If the leaf in the
stream is sailing along the left-hand side, will it continue in that direction?
We don't know. A single puff of wind can steer it to the other side. Trends mean
nothing in a chaotic system and the Earth’s atmosphere is the ultimate chaotic system.
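A sketch of the point about trends (again using the logistic map as a stand-in for a chaotic series): fit a straight line to the first stretch, extrapolate it, and the “trend” tells you essentially nothing about the later values.

```python
# Generate a chaotic series.
def chaotic_series(x0=0.2, r=3.99, n=60):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

series = chaotic_series()
fit_len = 30
xs, ys = list(range(fit_len)), series[:fit_len]

# Least-squares straight line over the first 30 points.
mx, my = sum(xs) / fit_len, sum(ys) / fit_len
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

for t in (40, 50, 59):
    print(f"t={t}: trend predicts {intercept + slope * t:.3f}, "
          f"series actually gives {series[t]:.3f}")
```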
Chaos, by
the way, does not mean that the system is unstable or completely
unpredictable. Chaotic systems are still deterministic and often have converging outcomes. Imagine dropping marbles into a round bowl. Each marble will take its own
unpredictable course but all will end up in the bottom of the bowl. Chaos is
characterised by two seemingly opposite principles: (1) small differences
in inputs can produce huge differences in outputs - as in a pinball machine, where minute differences in the momentum of the ball produce vastly different and
unpredictable trajectories - and (2) different trajectories can converge on
the same final outcome, like the marbles in the bowl. Although its route is unpredictable, the leaf
usually, eventually, floats to the bottom of the stream.
The Earth’s
climate is governed by the second of these principles.
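Both principles can be seen in a few lines (Python, with a logistic map as the “pinball” and a damped oscillator as the “bowl”; the specific equations are merely convenient stand-ins): two nearly identical pinballs end up in completely different places, while marbles dropped from very different positions all settle at the same point.

```python
# (1) Sensitivity: two almost identical starting points, wildly different outcomes.
a, b = 0.500000, 0.500001
for _ in range(40):
    a, b = 3.99 * a * (1 - a), 3.99 * b * (1 - b)
print(f"After 40 steps the two 'pinballs' sit at {a:.3f} and {b:.3f}")

# (2) Convergence: wherever a marble is dropped in the bowl, friction
# brings it to rest at the bottom.
def settle(position, velocity=0.0, steps=2000, dt=0.01):
    for _ in range(steps):
        accel = -5.0 * position - 2.0 * velocity   # restoring force + friction
        velocity += accel * dt
        position += velocity * dt
    return position

for start in (-3.0, 0.7, 9.0):
    print(f"Marble dropped at {start:+.1f} settles at {settle(start):+.4f}")
```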
Next rave: Some global warming myths dispelled.