
Fifteen Eighty Four

Academic perspectives from Cambridge University Press

16 April 2014

Is Global Warming Just a Giant Natural Fluctuation?

Shaun Lovejoy

When estimating voters' intentions, pollsters know that a statement like "40% of the voters support party A" will nearly always be wrong. However, when qualified with "19 times out of 20, this percentage is correct to within 5%", the statement may be exactly right. The qualification "19 times out of 20" is a standard statement of confidence that the true support lies in the range 35% to 45% of the voters, and it is estimated from elementary statistical theory.
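To make the arithmetic concrete, here is a minimal sketch of how that "19 times out of 20, within 5%" qualification arises; the sample size of 400 respondents is an illustrative assumption, not a figure from the text.

```python
# Margin of error for a poll: a minimal sketch of the "19 times out of 20,
# correct to within 5%" qualification. The sample size n = 400 is an
# illustrative assumption, not a figure from the post.
import math

p_hat = 0.40   # observed support for party A
n = 400        # assumed number of respondents
z = 1.96       # normal quantile for 95% confidence ("19 times out of 20")

standard_error = math.sqrt(p_hat * (1.0 - p_hat) / n)
margin = z * standard_error

print(f"95% confidence interval: {p_hat - margin:.3f} to {p_hat + margin:.3f}")
# With n = 400 the margin is about +/- 0.05, i.e. roughly 35% to 45%.
```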

So what about global warming? Shouldn't we apply the same statistical methodology and determine the probability that it is natural in origin? If the Intergovernmental Panel on Climate Change (IPCC) is right that it is "extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century" (IPCC, Assessment Report 5, AR5), then surely we should be able to statistically reject the hypothesis that the change is due to natural variability? Now, for the first time, [Lovejoy, 2014] claims to have done this, rejecting the natural-warming hypothesis with confidence levels greater than 99%, and most likely greater than 99.9%.

In IPCC usage, "extremely likely" refers to a probability in the range 95-100%, so the new result is quite compatible with AR5; yet the two conclusions are really more complementary than equivalent. Whereas the IPCC focuses on determining how much confidence we have in the truth of anthropogenic warming, the new approach determines our confidence in the falsity of natural variability. As any scientist knows, there is a fundamental asymmetry between the two approaches: no theory can ever be proven true beyond a somewhat subjective "reasonable doubt", but a theory can effectively be disproven by a single decisive experiment. In the case of anthropogenic warming, our confidence is based on a complex synthesis of data analysis, numerical model outputs and expert judgement. But no numerical model is perfect, no two experts agree on everything, and the IPCC's quantification of confidence itself depends on subjectively chosen methodologies. In comparison, the new approach makes no use of numerical models or experts; instead, it directly evaluates the probability that the warming is simply a giant, century-long natural fluctuation. Students of statistics know that the rejection of a hypothesis cannot be used to conclude the truth of any specific alternative; nevertheless, in many cases, including this one, the rejection of one greatly enhances the credibility of the other.
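As a rough illustration of what rejecting the natural-variability hypothesis means in practice, the sketch below runs a one-sided test against an assumed Gaussian distribution of centennial fluctuations. The 0.2 K fluctuation scale and 0.9 K observed warming are illustrative assumptions, and the paper's actual calculation uses scaling arguments and fatter-tailed distributions rather than this stand-in.

```python
# One-sided test of the "giant natural fluctuation" hypothesis: a minimal
# sketch, not Lovejoy's actual scaling-based calculation. The numbers below
# (0.2 K typical centennial fluctuation, 0.9 K observed warming) are
# illustrative assumptions, and a Gaussian tail is used in place of the
# fatter-tailed distribution estimated from paleo data in the paper.
from statistics import NormalDist

observed_warming_K = 0.9   # assumed century-scale warming since ~1880
natural_sigma_K = 0.2      # assumed std. dev. of centennial natural fluctuations

p_value = 1.0 - NormalDist(mu=0.0, sigma=natural_sigma_K).cdf(observed_warming_K)
print(f"P(natural fluctuation >= {observed_warming_K} K) ~ {p_value:.2e}")
# A tiny p-value is what "rejecting the natural-variability hypothesis at
# > 99% confidence" means; fat tails make the true probability larger than
# this Gaussian value, which is why the paper's bound is stated conservatively.
```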

The new study will be a blow to any remaining climate change deniers, since their two most convincing arguments (that the warming is natural in origin, and that the models are wrong) are either directly contradicted by the new study or simply do not apply to it. Indeed, by bypassing General Circulation Models (GCMs, the huge computer models) altogether, the new study was able to estimate the effective sensitivity of the climate to a doubling of CO2 as 2.5-4.2 °C (with 95% confidence), significantly more precise than the IPCC's GCM-based climate sensitivity of 1.5-4.5 °C ("high confidence"), an estimate that, in spite of vast improvements in computers, algorithms and models, hasn't changed since 1979. Whereas the main uncertainty in the GCM-based approach comes from uncertain radiative feedbacks involving clouds and aerosols, the uncertainty in the new approach is due to the poorly discerned time lag between radiative forcing and atmospheric heating (much of any new heating goes into warming the ocean, and only somewhat later does this warm the atmosphere). Figure 1 shows the unlagged forcing-temperature relationship; one can see that it is quite linear. Even the recent "pause" in the warming (since 1998) is pretty much on the line.
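For readers curious how such a regression-based sensitivity estimate could be set up, here is a minimal sketch; the function name, the 277 ppm pre-industrial baseline and the use of log2(CO2) as the forcing surrogate are assumptions for illustration, not the paper's exact procedure.

```python
# Effective climate sensitivity by regression, in the spirit of Figure 1:
# a minimal sketch assuming you supply annual global-mean CO2 concentrations
# (ppm) and temperature anomalies (K) as arrays; data loading is omitted.
import numpy as np

def sensitivity_per_doubling(co2_ppm, temp_anomaly_K, co2_preindustrial_ppm=277.0):
    """Regress temperature on log2(CO2 / CO2_pre), used as a linear surrogate
    for total anthropogenic forcing. Returns (slope in K per CO2 doubling,
    residuals interpreted as natural variability)."""
    forcing_proxy = np.log2(np.asarray(co2_ppm) / co2_preindustrial_ppm)
    slope, intercept = np.polyfit(forcing_proxy, temp_anomaly_K, 1)
    residuals = np.asarray(temp_anomaly_K) - (slope * forcing_proxy + intercept)
    return slope, residuals

# Hypothetical call: slope, natural = sensitivity_per_doubling(co2, temp)
# Figure 1 of the post reports a slope of about 2.33 K per CO2 doubling.
```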

The new approach is based on two innovations. The first is the use of globally averaged CO2 radiative forcing as a proxy for all anthropogenic forcings. This is justified by the tight relation between global economic activity and the emission of both aerosols (particulate pollution) and greenhouse gases. Most notably, it allows the new approach to implicitly include the cooling effects of aerosols, which are still poorly quantified in GCMs. The second innovation is to use nonlinear geophysics ideas about scaling, combined with paleotemperature data, to estimate the probability distribution of centennial-scale temperature fluctuations in the pre-industrial period; these probabilities are currently beyond the reach of GCMs. In future developments, the technique can be used to estimate return periods for natural warming events of different strengths and durations, including the post-war cooling and the slowdown ("pause") in the warming since 1998.
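As a hint of what a scaling fluctuation analysis involves, the sketch below computes Haar-type fluctuations of a temperature series at a given time scale, the kind of statistic on which the distribution of centennial fluctuations can be built. The helper function is hypothetical, and the paper's treatment of paleo data is considerably more involved.

```python
# Scaling (Haar) fluctuation analysis: a minimal sketch of how fluctuations
# Delta T(Delta t) can be estimated from a temperature series at different
# time scales. Illustrative only; not the paper's full analysis.
import numpy as np

def haar_fluctuation(series, lag):
    """Mean absolute Haar fluctuation at time scale `lag`: the difference
    between the means of the second and first halves of each window of
    length `lag`, averaged over all windows."""
    series = np.asarray(series, dtype=float)
    half = lag // 2
    flucts = [abs(series[i + half:i + lag].mean() - series[i:i + half].mean())
              for i in range(0, len(series) - lag + 1)]
    return float(np.mean(flucts))

# Hypothetical call: for an annual pre-industrial temperature series `temps`,
# haar_fluctuation(temps, 100) estimates the typical centennial fluctuation
# amplitude whose probability distribution the statistical test relies on.
```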

Figure 1

The global temperature anomaly since 1880 as a function of the anthropogenic forcing (using the CO2 heating as a linear surrogate for all anthropogenic effects). The regression line indicates the anthropogenic contribution; the residual is the natural variability. The slope, 2.33 K per CO2 doubling, is the climate sensitivity of the annually averaged global temperature to the annually averaged global radiative forcing for the same year (unlagged).
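In symbols, the decomposition described in the caption can be written schematically as follows, where λ ≈ 2.33 K per CO2 doubling is the fitted slope and the pre-industrial CO2 concentration is an assumed baseline; the logarithmic form reflects the standard approximation for CO2 radiative forcing, not the paper's exact formulation.

```latex
% Schematic decomposition described in the figure caption.
\[
  T(t) \;=\;
  \underbrace{\lambda \,\log_{2}\!\frac{\rho_{\mathrm{CO_2}}(t)}{\rho_{\mathrm{CO_2,\,pre}}}}_{\text{anthropogenic contribution}}
  \;+\;
  \underbrace{T_{\mathrm{nat}}(t)}_{\text{residual: natural variability}}
\]
```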

 

Reference:

Lovejoy, S. (2014), Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming, Climate Dynamics, (in press).


About The Author

Shaun Lovejoy

Shaun Lovejoy is the co-author of The Weather and Climate: Emergent Laws and Multifractal Cascades (2013). Lovejoy is Professor of Physics at McGill University, Montréal, and has ...
