The post Martian data confirm Earthly explanation of weather and climate first appeared on Fifteen Eighty Four | Cambridge University Press.

Fundamental transitions demand fundamental explanations: here, they should involve solar energy fluxes – after all, without the sun, everything would grind to a halt. Surprisingly, it was not until recently that the transition was explained as the lifetime of planetary-sized structures, the latter being determined solely and directly by the size of the Earth and by the solar power input: the “energy rate density” [*Lovejoy and Schertzer*, 2010]. This approach was later extended to the ocean, where the corresponding power per unit mass is roughly one hundred thousand times smaller and the transition is at roughly a year [*Stolle et al.*, 2012].

Although seductive, this theory contradicts a belief held among certain theorists that the large scales are essentially flat – two-dimensional. In this view, the energy rate density is only relevant at small scales. The new theory can therefore only be plausible if the atmosphere is never completely flat. Indeed, since the 1980s an alternative theory has increasingly gained support: that the atmosphere becomes increasingly stratified at larger scales, so that it is “in between” 3D and 2D – it is 2.55-dimensional [*Schertzer and Lovejoy*, 1985]!

During a discussion last year, Maarten Ambaum (University of Reading) pointed out that if the theory was correct, it should apply to other planets (and possibly Titan). This is where Mars comes in: it has the best extraterrestrial data with which to make a test. If the theory was right, there should be an analogous Martian transition. By taking into account the Martian solar heating and atmospheric thickness, we predicted that the Martian temperature and wind would undergo a transition analogous to the Earth’s, but at 1.5 sols (≈1.5 Earth days) rather than a week. Viking Lander data and Martian reanalyses (based on Martian orbiter data) confirmed this prediction quite accurately [*Lovejoy et al.*, 2014]. Now we have a third example of such a transition.
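The scaling logic behind the prediction can be sketched numerically. Assuming the classical turbulent lifetime τ = ε^{-1/3} L^{2/3} for the largest (planetary-scale) structures – with ε the energy rate density and L of the order of half the planet’s circumference – illustrative order-of-magnitude values of ε (my assumptions, not the published figures) give transition times of the right size:

```python
# Sketch: lifetime of planetary-scale structures, tau = eps**(-1/3) * L**(2/3)
# (Kolmogorov-style dimensional analysis; eps is the energy rate density, W/kg).
# The eps values below are illustrative order-of-magnitude assumptions.

def eddy_lifetime(eps_w_per_kg: float, L_m: float) -> float:
    """Turbulent lifetime (seconds) of a structure of horizontal extent L."""
    return eps_w_per_kg ** (-1.0 / 3.0) * L_m ** (2.0 / 3.0)

# Earth: L ~ half the circumference, eps ~ 1e-3 W/kg
tau_earth_s = eddy_lifetime(1e-3, 2.0e7)
print(f"Earth: ~{tau_earth_s / 86400:.1f} days")  # on the order of a week

# Mars: a smaller planet, but the thin atmosphere gives a much larger power per mass
SOL_S = 88775.0  # one Martian sol, in seconds
tau_mars_s = eddy_lifetime(4e-2, 1.07e7)
print(f"Mars:  ~{tau_mars_s / SOL_S:.1f} sols")   # on the order of 1.5 sols
```

With these illustrative inputs, the same dimensional formula puts the Earth’s transition near a week and the Martian one near 1.5 sols.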

But why expect macroweather and not climate?

The problem is that although fluctuations tend to cancel for periods longer than a week, a year, or 1.5 sols (for the Earth’s atmosphere, the ocean, and Mars, respectively) – averages tend to converge – at still longer scales (for the Earth, 30 – 100 years) they no longer cancel; rather, they tend to reinforce each other: they are again unstable. Since the shorter periods undeniably correspond to our idea of the weather, and the longer periods to the climate, we called the intermediate regime “macroweather” [*Lovejoy and Schertzer*, 2013]: don’t expect the climate, expect macroweather!

**References**

Lovejoy, S., and D. Schertzer (2010), Towards a new synthesis for atmospheric dynamics: space-time cascades, *Atmos. Res.*, *96*, 1-52, doi: 10.1016/j.atmosres.2010.01.004.

Lovejoy, S., and D. Schertzer (2013), *The Weather and Climate: Emergent Laws and Multifractal Cascades*, 496 pp., Cambridge University Press, Cambridge.

Lovejoy, S., J. P. Muller, and J. P. Boisvert (2014), On Mars too, expect macroweather, *Geophys. Res. Lett.*, *in press*.

Schertzer, D., and S. Lovejoy (1985), The dimension and intermittency of atmospheric dynamics, in *Turbulent Shear Flow 4*, edited by B. Launder, pp. 7-33, Springer-Verlag.

Stolle, J., S. Lovejoy, and D. Schertzer (2012), The temporal cascade structure and space-time relations for reanalyses and Global Circulation models, *Quart. J. of the Royal Meteor. Soc.*, *in press*.


The post The Case of the Missing Quadrillion first appeared on Fifteen Eighty Four | Cambridge University Press.

As NOAA acknowledges, their schematic is no more than an update of an iconic graph published at the dawn of the paleoclimate revolution ([*Mitchell*, 1976], see fig. 2) which – given the absence of data at the time – was admitted to be only an “educated guess”. Within fifteen years of its publication, developments in ocean, ice core and other temperature proxies had already shown that it was wrong by a whopping 10 or more orders of magnitude ([*Lovejoy and Schertzer*, 1986], [*Shackleton and Imbrie*, 1990])! Today the full extent of the error in the spectral density is closer to a factor of a quadrillion (in fig. 2, the range over which the actual spectrum varies relative to the roughly flat background).

In spite of this, Mitchell’s figure continues to be reproduced in modern climate reviews and textbooks ([*Dijkstra and Ghil*, 2005], [*Fraedrich et al.*, 2009], [*Dijkstra*, 2013]). And in case a skeptic fails to be convinced, the NOAA site assures us that just “because a particular phenomenon is called an oscillation, it does not necessarily mean there is a particular oscillator causing the pattern. Some prefer to refer to such processes as variability.” Variability has thus been reduced to oscillations; the spectral continuum – and its quadrillion – has been deleted.

How could the quadrillion be ignored for so long? The answer is that being guided by the wrong “mental picture” has had little consequence for practical weather and climate science, which uses numerical models that – at least up to 50 – 100 years – have roughly the correct spectra (so that even with respect to the models, NOAA’s mental picture is wrong by about 5 – 6 orders of magnitude). And we find that practical weather and climate forecasts do indeed treat the spikes as no more than perturbations.


The post Is Global Warming Just a Giant Natural Fluctuation? first appeared on Fifteen Eighty Four | Cambridge University Press.

So what about global warming? Shouldn’t we apply the same statistical methodology and determine the probability of it being natural in origin? If the Intergovernmental Panel on Climate Change (IPCC) is right that it is “extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century” (IPCC, Assessment Report 5, AR5), then surely we should be able to easily reject, statistically, the hypothesis that the change is due to natural variability? Now, for the first time, [*Lovejoy*, 2014] claims to have done this, rejecting the natural warming hypothesis with confidence levels greater than 99% and most likely greater than 99.9%.

In IPCC usage, “extremely likely” refers to a probability in the range 95-100%, so that the new result is quite compatible with AR5; yet the two conclusions are really more complementary than equivalent. Whereas the IPCC focuses on determining how much confidence we have in the truth of anthropogenic warming, the new approach determines our confidence in the falsity of natural variability. As any scientist knows, there is a fundamental asymmetry between the two approaches: whereas no theory can ever be *proven* true beyond a somewhat subjective “reasonable doubt”, a theory can effectively be *disproven* by a single decisive experiment. In the case of anthropogenic warming, our confidence is based on a complex synthesis of data analysis, numerical model outputs and expert judgements. But no numerical model is perfect, no two experts agree on everything, and the IPCC confidence quantification itself depends on subjectively chosen methodologies. In comparison, the new approach makes use of neither numerical models nor experts; instead it attempts to directly evaluate the probability that the warming is simply a giant, century-long natural fluctuation. While students of statistics know that the statistical rejection of a hypothesis cannot be used to conclude the truth of any specific alternative, nevertheless – in many cases, including this one – the rejection of one greatly enhances the credibility of the other.

The new study will be a blow to any remaining climate change deniers, since their two most convincing arguments – that the warming is natural in origin, and that the models are wrong – are either directly contradicted by, or simply do not apply to, the new study. Indeed, by bypassing any use of Global Circulation Models (huge computer models), the new study was able to estimate the effective sensitivity of the climate to a doubling of CO_{2} as 2.5 – 4.2^{o}C (with 95% confidence), which is significantly more precise than the IPCC’s GCM-based climate sensitivity of 1.5 – 4.5^{o}C (“high confidence”) – an estimate that, in spite of vast improvements in computers, algorithms and models, hasn’t changed since 1979. Whereas the main uncertainty in the GCM-based approach comes from uncertain radiative feedbacks with clouds and aerosols, in the new approach the uncertainty is due to the poorly discerned time lag between the radiative forcing and atmospheric heating (much of any new heating goes into warming the ocean, and only somewhat later does this warm the atmosphere). Figure 1 shows the unlagged forcing–temperature relationship; one can see that it is quite linear. Even the recent “pause” in the warming (since 1998) is pretty much on the line.

The new approach is based on two innovations. The first is the use of globally averaged CO_{2} radiative forcing as a proxy for all the anthropogenic forcings. This is justified by the tight relation between global economic activity and the emission of both aerosols (particulate pollution) and greenhouse gases. Most notably, this allows the new approach to implicitly include the cooling effects of aerosols, which are still poorly quantified in GCMs. The second innovation is to use nonlinear geophysics ideas about scaling, combined with paleotemperature data, to estimate the probability distribution of centennial-scale temperature fluctuations in the pre-industrial period. These probabilities are currently beyond the reach of GCMs. In future developments, the new technique can be used to estimate return periods for natural warming events of different strengths and durations, including the post-war cooling as well as the slow-down (“pause”) in the warming since 1998.
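In outline, the statistical step behind fig. 1 is a simple regression. The sketch below uses entirely synthetic (made-up) series, purely to illustrate the method: the temperature anomaly is regressed against the CO_{2} forcing measured in doublings, the slope estimates the effective sensitivity, and the residuals estimate the natural variability. The CO_{2} history, the 285 ppm baseline and the “natural” wiggle are all illustrative assumptions, not the paper’s data.

```python
import numpy as np

# Illustrative sketch only: synthetic annual series standing in for real data.
years = np.arange(1880, 2014)
co2_ppm = 285.0 * np.exp(0.0018 * (years - 1880))         # made-up CO2 history
forcing = np.log2(co2_ppm / 285.0)                        # forcing, in CO2 doublings
natural = 0.10 * np.sin(2 * np.pi * (years - 1880) / 60)  # made-up "natural variability"
temp_anom = 2.33 * forcing + natural                      # synthetic temperature anomaly

# Regress temperature against forcing: slope = effective sensitivity (K per CO2 doubling);
# the residual is the estimate of the natural variability.
slope, intercept = np.polyfit(forcing, temp_anom, 1)
residual = temp_anom - (slope * forcing + intercept)
print(f"effective sensitivity ~ {slope:.2f} K per CO2 doubling")
```

Because the “natural” term is not exactly orthogonal to the forcing, the recovered slope sits near, but not exactly at, the 2.33 used to build the synthetic series – a reminder that the residual decomposition is statistical, not exact.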

Figure 1: The global temperature anomaly since 1880 as a function of the anthropogenic forcing (using the CO_{2} heating as a linear surrogate for all the anthropogenic effects). The regression indicates the anthropogenic contribution; the residual is the natural variability. The slope, 2.33 K per CO_{2} doubling, is the climate sensitivity for the annually averaged global temperature as a function of the annually averaged global radiative forcing for the same year (unlagged).

**Reference:**

Lovejoy, S. (2014), Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming, *Climate Dynamics*, *(in press)*.


The post Numerical Weather Models first appeared on Fifteen Eighty Four | Cambridge University Press.

With clouds, what one sees with the naked eye depends not only on the distribution of water droplets and ice particles but also on the lighting; yet it is easy to imagine that other, better defined atmospheric structures – such as regions above a certain temperature or wind threshold – might be similarly shaped. In classical turbulence theory, the structures of the atmospheric motions are called “eddies”, and for over 40 years the prevailing statistical theory has been precisely that the small eddies are 3D, whereas the large ones are 2D.

But what is the real dimension of atmospheric motions?

Thirty years ago, on the basis of balloon measurements and scaling symmetries, at two conferences coauthor Daniel Schertzer and I made the audacious claim that atmospheric motions have the in-between “elliptical” dimension 23/9 = 2.555… [*Schertzer and Lovejoy*, 1983a], [*Schertzer and Lovejoy*, 1983b]. The year was 1983 and the nonlinear chaos and fractal revolutions were in full swing: we were invited to develop the idea further [*Schertzer and Lovejoy*, 1985]. To understand the claim, consider the schematic diagrams in fig. 1 illustrating the idea of “elliptical dimensions”. Think of the cloud “containers” which bound the actual fractal clouds/eddies. How do their volumes change as the horizontal lengths of the containers get larger? For 3D clouds, the volumes would increase with the cube of the horizontal length, while for 2D clouds the thickness would not vary much with horizontal extent and the volume would only increase quadratically. The notion of elliptical dimension simply generalizes this: the container volume is the horizontal extent raised to the power *D*_{el}. When the vertical extents of the containers grow as the 5/9 power of their horizontal extents, *D*_{el} = 2 + 5/9 = 23/9 ≈ 2.555.

Back in 1983, the enthusiasm of the times smiled on this theory of anisotropic scaling turbulence. But in the longer run, it came up against the meteorological community, which had embraced Jules Charney’s theory of “quasi-geostrophic turbulence”. This theory was based on the conventional (and ultimately flawed) assumption that small and large scales have totally different dynamics (with 3D and 2D characters respectively). The ensuing thirty years is a classic story of separate and unequal scientific development. Eschewing fundamental questions such as the dimension of atmospheric motions, the meteorological community focused instead on the rapid development of numerical models. In contrast, the virtually unfunded 23/9 D model developed more slowly, in the nascent and separate nonlinear geophysics community. It wasn’t until 2012 (see [*Pinel et al.*, 2012]) that it finally overcame the last argument in favour of geostrophic turbulence. Today, the 23/9 D theory is the only one capable of explaining modern remote and in situ observations. These developments are among the primary subjects of the book [*Lovejoy and Schertzer*, 2013], and more of this story is told in another blog on aircraft data.

If reality is 23/9 D and the numerical models are realistic, then the models should reflect this fact. Indeed, detailed analyses of the internal statistical structure of the models confirm that they display the predicted types of multifractal cascades [*Stolle et al.*, 2009]; but this only indirectly supports the 23/9 D picture. However, so far no one has noticed that the entire historical development of numerical weather models supports a 23/9 dimensionality! Think of the containers. For a model to be optimally adjusted so as to contain the largest eddies, its vertical extent (in model levels) should be close to the 5/9 power of its horizontal extent (in pixels), so that overall the number of degrees of freedom (model elements) is roughly the 23/9 power of the number of horizontal pixels.

In developing a numerical weather model, the horizontal resolution is generally fixed by practical (mostly computer) constraints; however, some experimentation is needed to determine the optimum number of vertical levels. Therefore, if we plot the number of vertical levels as a function of the number of horizontal points, we should expect it to at least roughly follow a 5/9 power law. Fig. 2 – taken from an admittedly somewhat subjective sampling of historically significant models – confirms this idea surprisingly well.
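The bookkeeping is easy to check in a few lines. In this sketch (my illustration, not the paper’s fit), the number of vertical levels scales as the 5/9 power of the number of horizontal grid points along a side, so the total number of model elements scales as the 23/9 power:

```python
# If Nz ~ c * Nx**(5/9), then the total element count N = Nx**2 * Nz ~ c * Nx**(23/9).
# The prefactor c is arbitrary; only the exponents matter here.

def vertical_levels(nx: int, c: float = 1.0) -> float:
    return c * nx ** (5.0 / 9.0)

def total_elements(nx: int, c: float = 1.0) -> float:
    return nx ** 2 * vertical_levels(nx, c)

# Doubling the horizontal resolution should multiply the number of levels
# by 2**(5/9) ~ 1.47, and the total element count by 2**(23/9) ~ 5.88.
ratio_levels = vertical_levels(512) / vertical_levels(256)
ratio_total = total_elements(512) / total_elements(256)
print(ratio_levels)  # ~1.47
print(ratio_total)   # ~5.88
```

So a model that doubles its horizontal resolution should, on this picture, add only about 47% more vertical levels – which is the trend fig. 2 exhibits.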

Whatever one’s opinion of the 23/9 D picture of atmospheric motions, it has left its mark!

**References**

Lovejoy, S., and D. Schertzer (2013), *The Weather and Climate: Emergent Laws and Multifractal Cascades*, 496 pp., Cambridge University Press, Cambridge.

Pinel, J., S. Lovejoy, D. Schertzer, and A. F. Tuck (2012), Joint horizontal – vertical anisotropic scaling, isobaric and isoheight wind statistics from aircraft data, *Geophys. Res. Lett.*, *39*, L11803 doi: 10.1029/2012GL051698.

Schertzer, D., and S. Lovejoy (1983a), Elliptical turbulence in the atmosphere, paper presented at the Fourth Symposium on Turbulent Shear Flows, Karlsruhe, West Germany, Nov. 1-8, 1983.

Schertzer, D., and S. Lovejoy (1983b), On the dimension of atmospheric motions, paper presented at IUTAM Symp. on turbulence and chaotic phenomena in fluids, Kyoto, Japan.

Schertzer, D., and S. Lovejoy (1985), The dimension and intermittency of atmospheric dynamics, in *Turbulent Shear Flow 4*, edited by B. Launder, pp. 7-33, Springer-Verlag.

Stolle, J., S. Lovejoy, and D. Schertzer (2009), The stochastic cascade structure of deterministic numerical models of the atmosphere, *Nonlin. Proc. in Geophys.*, *16*, 1–15.


The post Aircraft Data: Not What You Think first appeared on Fifteen Eighty Four | Cambridge University Press.

Aircraft measurements are often our only direct source of data about the variability of the wind, temperature, humidity and other atmospheric variables in the horizontal direction.

The aircraft’s sudden “transitions from quiescence to chaos” – from apparently smooth to chaotic conditions – are a defining feature of the turbulent notion of “intermittency” (see fig. 1). Although first described as “spottiness” in laboratory flows [*Batchelor and Townsend*, 1949], it reflects the ubiquitous tendency of the atmosphere to concentrate almost all its activity in regions occupying only a tiny fraction of the whole – for example in storms, and even in the centers of storms. Since the 1980s – and largely thanks to the development of the multifractal cascade models reviewed in WC – there has been dramatic progress in understanding intermittency. It is now clear that by its very nature turbulence is intermittent: as we zoom in, we find violent regions in proximity to ones of relative calm. Intermittency is evidenced by the occasional sharp “jumps” in the wind (fig. 1a), associated with high levels of turbulent energy flux (fig. 1b). However, examination of the apparently calm regions shows that they too have embedded regions of high activity, and as we zoom into smaller and smaller regions this strong heterogeneity continues in a scaling manner until we reach the dissipation scale. This explains why aircraft measurements of the wind invariably find roughly Kolmogorov-type (i.e. turbulent) statistics even in apparently calm regions of the atmosphere. This is hardly surprising, since the ratio of the nonlinear terms in the dynamical equations to the linear (dissipative) terms (the Reynolds number) is typically huge (≈10^{12}). In any event, large-scale regions of true laminar flow have yet to be documented by actual measurements. On the contrary, the multifractal, multiplicative cascade picture has been well verified even at large scales (see WC, ch. 4). It would therefore be a mistake to separate regions of high and low “turbulent intensity” and associate them with different mechanisms.

Other blogs will discuss the consequences of the intermittency for classical notions and theories such as stable atmospheric layers and linear wave theories. Here, I wish to report on a more immediate concern: the fact that aircraft measurements are often our only direct source of data about the variability of the wind, temperature, humidity and other atmospheric variables in the horizontal direction, yet the very turbulence they seek to measure, analyze and characterize modifies the trajectories and biases the results in ways that we are only now starting to understand.

There are two effects that we can note in fig. 1a. The first is that although the aircraft stayed within 0.1% of a constant pressure level, its altitude is nevertheless quite variable: it is highly intermittent (fig. 1b) – indeed fractal (even multifractal, [*Lovejoy et al.*, 2009a]) – although, fortunately for the passengers, at scales below 1 – 2 km the aircraft’s inertia does tend to smooth this out (otherwise, in the small-scale fractal limit, the accelerations would be infinite [*Lovejoy et al.*, 2004]!).

Second, in addition to its intermittency, the trajectory is not “flat” but wanders up and down. Both effects bias the measurements. We shouldn’t be surprised that the constant up-and-down intermittent “jiggling” of the aircraft biases the exponents characterizing the intermittency of the wind (and other fields); these are increased by roughly 0.02 to 0.03. However, these intermittency corrections turn out to be much less of a worry than the departures from constant levels. Because the atmosphere is highly stratified, moving up and down a bit can lead to much larger variations in the wind than simply moving along in the horizontal direction. Due to this effect, the exponents characterizing the mean wind fluctuations could be off by as much as 0.3.

Atmospheric structures become flatter and flatter at larger and larger scales, but they do so in a scaling (power-law) way: contrary to the postulates of the classical 3D/2D model of isotropic turbulence, there is no drastic scale transition in the atmosphere’s statistics. However, ever since the famous *Global Atmospheric Sampling Program* (GASP) experiment (fig. 2), there have been repeated reports of drastic transitions in aircraft statistics (spectra) of the horizontal wind, typically at scales of several hundred kilometers. We are now in a position to resolve the apparent contradiction between scaling 23/9 D dynamics and observations with broken scaling. At some critical scale – which depends on the aircraft characteristics as well as on the turbulent state of the atmosphere – the aircraft “wanders” sufficiently off level that the wind it measures changes more due to the level change than to the horizontal displacement of the aircraft. It turns out that this effect can easily explain the observations. Rather than a transition from isotropic 3D to isotropic 2D behavior (spectra with transitions from k^{-5/3} to k^{-3}, where k is a wavenumber, an inverse distance), one instead has a transition from k^{-5/3} (small scales) to k^{-2.4} at larger scales (fig. 2), the latter being the typical exponent found in the vertical direction (for example by dropsondes, [*Lovejoy et al.*, 2009b]).
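Spectral exponents such as the 5/3 and 2.4 above are estimated from log-log fits of the power spectrum against wavenumber. The sketch below illustrates the generic procedure on a synthetic random walk (whose spectrum goes as k^{-2}); it is a stand-in for aircraft data, not an analysis of them:

```python
import numpy as np

# Synthetic signal with a known power-law spectrum: a random walk has E(k) ~ k**-2.
rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(2**14))

# Periodogram: squared FFT amplitude against (integer) wavenumber k
spec = np.abs(np.fft.rfft(signal)) ** 2
k = np.arange(len(spec))

# Fit the log-log slope over an intermediate range of wavenumbers,
# avoiding k = 0 and the very highest wavenumbers.
lo, hi = 4, 400
slope, _ = np.polyfit(np.log(k[lo:hi]), np.log(spec[lo:hi]), 1)
print(f"fitted spectral exponent ~ {slope:.2f}")  # should come out close to -2
```

Exactly the same log-log fit – applied over the small-scale and large-scale ranges separately – is how the k^{-5/3} and k^{-2.4} regimes in fig. 2 are quantified.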

Since the 1980s, the wide-range scaling of the atmosphere in both the horizontal and the vertical has been increasingly documented; many examples are shown in WC, ch. 1. By around 2010, the only remaining empirical support for the 3D/2D model was the interpretation of fig. 2 (and others like it) in terms of a “dimensional transition” from 3D to 2D. These interpretations were already implausible, since a re-examination of the literature had shown that the large scales were closer to k^{-2.4} than k^{-3}, as expected from the “wandering” aircraft trajectories. Finally, just last year, with the help of ≈14,500 commercial aircraft flights with high-accuracy GPS altitude measurements, it was possible for the first time to determine the typical variability of the wind in vertical sections, and this was almost exactly the predicted 23/9 = 2.555… value: the measured “elliptical dimension” was ≈2.57. It is hard to see how the 3D/2D model can survive this finding.

So next time you buckle up, celebrate the fact that the turbulence you feel is still stimulating scientific progress!

**References:**

Batchelor, G. K., and A. A. Townsend (1949), The Nature of turbulent motion at large wavenumbers, *Proceedings of the Royal Society of London*, *A 199*, 238.

Gage, K. S., and G. D. Nastrom (1986), Theoretical Interpretation of atmospheric wavenumber spectra of wind and temperature observed by commercial aircraft during GASP, *J. of the Atmos. Sci.*, *43*, 729-740.

Lovejoy, S., and D. Schertzer (2013), *The Weather and Climate: Emergent Laws and Multifractal Cascades*, 496 pp., Cambridge University Press, Cambridge.

Lovejoy, S., D. Schertzer, and A. F. Tuck (2004), Fractal aircraft trajectories and nonclassical turbulent exponents, *Physical Review E*, *70*, 036306.

http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/PREtraj.final36306-1.pdf

Lovejoy, S., A. F. Tuck, D. Schertzer, and S. J. Hovde (2009a), Reinterpreting aircraft measurements in anisotropic scaling turbulence, *Atmos. Chem. Phys. Discuss.*, *9*, 3871-3920.

http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/acp-2008-0616-final.pdf

Lovejoy, S., A. F. Tuck, S. J. Hovde, and D. Schertzer (2009b), The vertical cascade structure of the atmosphere and multifractal dropsonde outages, *J. Geophy. Res.*, *114*, D07111, doi: 10.1029/2008JD010651.

http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/jgr.vertical.cascades.final.2008JD010651-2.pdf

Pinel, J., S. Lovejoy, D. Schertzer, and A. F. Tuck (2012), Joint horizontal – vertical anisotropic scaling, isobaric and isoheight wind statistics from aircraft data, *Geophys. Res. Lett.*, *39*, L11803 doi: 10.1029/2012GL051698.

http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Pinel.GRL.2012GL051689.final2.pdf


The post Expect Macroweather first appeared on Fifteen Eighty Four | Cambridge University Press.

While all this sounds reasonable, it turns out that it is not based on analysis of real-world data. In recent papers, “What is climate?” and “The climate is *not* what you expect” ([*Lovejoy*, 2013], [*Lovejoy and Schertzer*, 2013]), we used a new kind of “fluctuation analysis” to show that – at least up to 100,000 years – there are not two, but rather three atmospheric regimes, each with a different type of variability. In between the weather and the climate there is a new, intermediate “macroweather” regime. The time scale characteristic of the weather regime is about 10 days; it is determined by the lifetime of planetary-sized structures, itself determined by the (solar-forced) energy flux. The time scale of the climate regime is determined by the scale at which slow climate processes begin to dominate the weather processes, which become weaker and weaker at longer scales. In the industrial period the latter are dominated by anthropogenic effects and the transition scale is about 30 years; in the preindustrial period the forcing is smaller and it is closer to 100 years.

What does it mean to define a regime by its type of variability? An illustration makes it intuitive. Consider fig. 1, which shows examples from weather scales (0.067 s and 1 hour resolution, bottom two), macroweather (20 days, second from the top) and climate (1 century, top). For the middle two, the daily and annual cycles were removed, and for each, 720 consecutive points are shown so that the differences in the characters of each regime are visually obvious. The bottom two weather curves – whose characters are very similar to each other – “wander” up or down, resembling drunkard’s walks, typically increasing over longer and longer time periods. The macroweather curve (second from the top) has a totally different character: upward fluctuations are typically followed by nearly cancelling downward ones (and vice versa). Averages over longer and longer times tend to converge, apparently vindicating the “the climate is what you expect” idea: we anticipate that at decadal, or at least centennial, scales averages will be virtually constant, with only slow, small-amplitude variations. However, the century-scale climate curve (top) again displays a weather-like (wandering) variability. Although this plot shows temperatures, other atmospheric fields (wind, humidity, precipitation, etc.) are similar (although for these fields there are no high-quality paleo proxies).

There are thus three qualitatively different regimes, a fact that was first recognized in the 1980s and that has been confirmed several times since, but whose significance has not been appreciated [*Lovejoy and Schertzer*, 1986], [*Pelletier*, 1998], [*Huybers and Curry*, 2006]. The old analyses were based on temperature differences and spectra, and these suffered from various technical limitations and from difficulties in interpretation. The new technique works by defining the fluctuation over a given time interval as the difference between the averages over the first and second halves of the interval (“Haar” fluctuations); it is thus very easy to interpret. In the “wandering” weather and climate regimes, the averaging in the definition isn’t important: fluctuations are essentially differences. In the cancelling “macroweather” regime, the differences aren’t important: fluctuations are essentially averages. Whereas in the weather and climate regimes fluctuations tend to increase with time scale, in the macroweather regime they tend to decrease. For example, over GCM grid scales (a few degrees across), the average fluctuations increase up to about 5^{o}C at about 10 days. Up until about 30 years they tend to decrease, to about 0.8^{o}C, and then – in accord with the amplitude and time scale of the ice ages – they increase again, up to about 5^{o}C at ≈100,000 years (fig. 2). In macroweather, averages converge: “macroweather is what you expect”. In contrast, the “wandering” climate regime is very much like the weather, so that – at least at scales of 30 years or more – the climate is “what you get”. Conveniently, the choice of a 30-year period to define the climate normal can now be justified as the time scale over which fluctuations are smallest.
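The Haar fluctuation just described is simple to implement. The minimal version below (my own sketch, not the authors’ code) averages |mean of second half − mean of first half| over non-overlapping intervals, and synthetic series show the two behaviours: “wandering” (random-walk-like) signals have fluctuations that grow with time scale, while “cancelling” (white-noise-like) signals have fluctuations that shrink:

```python
import numpy as np

def haar_fluctuation(series, dt):
    """Mean absolute Haar fluctuation at time scale dt (dt even):
    |average of second half - average of first half| of each interval,
    averaged over non-overlapping intervals of length dt."""
    n = (len(series) // dt) * dt
    blocks = series[:n].reshape(-1, dt)
    first = blocks[:, : dt // 2].mean(axis=1)
    second = blocks[:, dt // 2 :].mean(axis=1)
    return np.abs(second - first).mean()

rng = np.random.default_rng(1)
noise = rng.standard_normal(2**14)  # "cancelling", macroweather-like
walk = np.cumsum(noise)             # "wandering", weather/climate-like

for dt in (8, 64, 512):
    print(dt, haar_fluctuation(walk, dt), haar_fluctuation(noise, dt))
# walk: fluctuations grow with dt (H > 0); noise: they shrink (H < 0)
```

Plotting these fluctuations against dt on log-log axes, and reading off where the slope changes sign, is exactly how the weather/macroweather/climate transition scales in fig. 2 are located.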

In a nutshell, average weather turns out to be macroweather – not climate – and climate refers to the slow evolution of macroweather. This evolution is the result of external forcings (solar, volcanic etc.) coupled with natural (internal) variability: i.e. forcings with feedbacks. In the recent period, we must add anthropogenic forcings.

Why “macroweather”, and not “microclimate”? It turns out that when GCMs – which are essentially weather models with extra couplings (to oceans, sea ice, etc.) – are run in their “control” modes (i.e. with constant atmospheric composition, constant solar output, no volcanoes, etc.), fluctuation and other analyses show that they reproduce well both the wandering weather and the cancelling macroweather behaviours, so that macroweather is indeed long-term weather. To obtain the climate, they must at least include new climate forcings, but probably also new internal climate mechanisms: a recent fluctuation study shows that the multicentennial variability of GCM simulations over the period 1500-1900 is somewhat too weak [*Lovejoy et al.*, 2012].

(this blog post is adapted from “Macroweather, not climate, is what you expect” posted on Climate Etc. in January 2013)

**References**

Heinlein, R. A. (1973), *Time Enough for Love*, 605 pp., G. P. Putnam’s Sons, New York.

Huybers, P., and W. Curry (2006), Links between annual, Milankovitch and continuum temperature variability, *Nature*, *441*, 329-332 doi: 10.1038/nature04745.

Lovejoy, S. (2013), What is climate?, *EOS*, *94* (1), 1-2.

http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Macroweather.EOS.17.10.12.pdf

Lovejoy, S., and D. Schertzer (1986), Scale invariance in climatological temperatures and the spectral plateau, *Annales Geophysicae*, *4B*, 401-410.

http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Annales.Geophys.all.pdf

Lovejoy, S., and D. Schertzer (2013), The climate is not what you expect, *Bull. Amer. Meteor. Soc.*, *(in press)*.

http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/climate.not.13.12.12.pdf

Lovejoy, S., D. Schertzer, and D. Varon (2012), Do GCM’s predict the climate…. or macroweather?, *Earth Syst. Dynam. Discuss.*, *3*, 1259-1286, doi: 10.5194/esdd-3-1259-2012.

Pelletier, J. D. (1998), The power spectral density of atmospheric temperature from scales of 10^{-2} to 10^{6} yr, *EPSL*, *158*, 157-164.


The post International World Wind Championships first appeared on Fifteen Eighty Four | Cambridge University Press.

That face-saving answer avoided more interesting questions: if Barrow was such a clear winner, why was a committee needed, and did Barrow really win? The stories told so far describe how the champion was overlooked for nearly a decade, and how the committee checked that the equipment was calibrated and properly functioning (http://blog.ametsoc.org/uncategorized/mt-washingtons-world-record-wind-toppled/). In other words, the problem had been reduced to comparing two speeds and deciding whether or not the numbers were reliable.

Yet wind is turbulent: it varies constantly over a huge range of scales, from milliseconds and millimetres on up. What, then, is its speed? Indeed, contemplating the highly intricate “Weierstrass function-like” (i.e. fractal-like) structure of the wind, the pioneer Lewis F. Richardson asked “does the wind possess a velocity?”, continuing: “this question, at first sight foolish, improves upon acquaintance” [*Richardson*, 1926]. For any site, a short burst of wind is far more likely to reach record speeds than a longer average. There is thus a clear dependence of both typical and extreme speeds on the resolution of the wind record. Unless this is taken into account, the contest is the equivalent of a duel between lightweights and heavyweights.

Conventionally, the resolution issue is handled by distinguishing “sustained winds” from “gusts”. Both terms are fairly subjective: sometimes a “sustained” wind is defined as one that is exceeded for a certain duration, sometimes as an average (usually over one minute). For gusts, the US National Weather Service even gives *two* definitions: the traditional one (“gusts are reported when the peak wind speed reaches at least 16 knots and the variation in wind speed between the peaks and lulls is at least 9 knots. The duration of a gust is usually less than 20 seconds”) as well as the one used by the WMO committee: a gust is a 3 second average. Since, officially, the duration of a sustained wind can vary (from one minute for tropical cyclones up to ten minutes for determining Beaufort sea states), various rules of thumb have evolved. For example, the US Navy gives a formula to convert sustained winds at one minute into sustained winds at ten minutes (multiply by 0.88). This turns out to be close to the theoretical cascade prediction of 0.82 for the decadal-scale change of wind statistics near the mean (using the empirical value of the exponent C_{1}, see table 8.1 of [*Lovejoy and Schertzer*, 2013]; note that this is not the correct formula for extremes!).
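The closeness of the Navy rule of thumb to the cascade prediction is easy to verify. A minimal sketch in Python, assuming the near-mean statistics change by the factor λ^(-C_{1}) per factor of λ = 10 in averaging time, with C_{1} ≈ 0.085 taken here as an illustrative value close to the empirical wind exponent:

```python
# Decadal-scale change of near-mean wind statistics under a multiplicative cascade.
# Assumption: averaging over a time interval a factor lam longer reduces the
# typical (near-mean) speed by lam**(-C1); C1 = 0.085 is an illustrative value
# close to the empirical wind exponent (table 8.1, Lovejoy & Schertzer, 2013).

def decadal_factor(C1: float, lam: float = 10.0) -> float:
    """Factor converting sustained winds to averages a factor lam longer."""
    return lam ** (-C1)

cascade = decadal_factor(0.085)  # cascade prediction for near-mean statistics
navy = 0.88                      # US Navy one-minute -> ten-minute rule of thumb

print(f"cascade prediction: {cascade:.2f}, Navy rule: {navy:.2f}")
```

With C_{1} ≈ 0.085 this gives λ^(-C_{1}) ≈ 0.82, within about 7% of the Navy factor of 0.88.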

From the cascade point of view, the resolution dependence is fundamental, yet straightforward to understand and quantify. At a given scale, the variability depends directly on the ratio of the planetary scale (where the cascade starts) to the measurement scale. Since the cascade builds up scale by scale, the wider the range of scales, the higher the variability. When averaging in time, the relevant ratio is that of the lifetime of planetary structures (about 10 days; see ch. 8 of [*Lovejoy and Schertzer*, 2013]) to the measurement resolution. Since the measurement scale depends on the observer, the appropriate measure of the wind is a resolution-invariant wind “singularity”, which allows winds at different scales to be objectively compared.
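As a concrete sketch of how such a singularity can be computed (an illustrative reconstruction, not necessarily the exact normalization used in the analysis): take the scale ratio λ = τ_w/τ with τ_w ≈ 10 days, and assign a speed v measured at resolution τ the exponent g = 1/3 + log(v/v_w)/log λ. Here the 1/3 is the classical Kolmogorov exponent, and v_w ≈ 26.5 *m*/*s* is an assumed planetary-scale reference speed; both are hypotheses chosen to be consistent with the values quoted in this post.

```python
import math

TAU_W = 10 * 24 * 3600.0  # lifetime of planetary structures: ~10 days, in seconds
V_W = 26.5                # assumed planetary-scale reference speed (m/s)
H = 1.0 / 3.0             # classical Kolmogorov exponent (assumption)

def singularity(v_mps: float, tau_s: float) -> float:
    """Resolution-invariant exponent g for a speed v measured over duration tau."""
    lam = TAU_W / tau_s   # scale ratio: outer (planetary) scale / resolution
    return H + math.log(v_mps / V_W) / math.log(lam)

# Mt. Washington's 103 m/s "long gust" (4.7 s) vs its 69.3 m/s windy-hour mean:
print(round(singularity(103.0, 4.7), 3))    # ≈ 0.445
print(round(singularity(69.3, 3600.0), 3))  # ≈ 0.509
```

On this scale a short extreme gust and a long sustained average become directly comparable, which is the point of the exercise.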

So what about the Mt. Washington – Barrow contest? According to [*Pagluica*, 1934], the 1934 measurement was of an average wind of 231 miles per hour over a distance of 0.3 miles. This translates into 103 *m*/*s* over a period of 4.7 *s*; with respect to the Barrow record, it is thus a “long gust” whose duration is 57% longer than the official 3 *s*. Converting the speeds into singularities (g), we find for the 3 *s* Barrow gust g = 0.449, very slightly above the Mt. Washington 4.7 *s* gust value (g = 0.445). However, both the Mt. Washington and Barrow events included measurements of sustained winds at other resolutions. For example, we are told that the Mt. Washington gust was embedded in a particularly windy hour whose mean was 69.3 *m*/*s*. During this hour, an extreme 5 minute average of 84.0 *m*/*s* was recorded, as well as an extreme one minute average of 85.8 *m*/*s* and a 17 *s* average of 93.9 *m*/*s*. Similarly, the Barrow gust occurred within a 5 minute interval whose average wind speed was 48.8 *m*/*s*.
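The unit conversions behind these numbers are simple arithmetic; a quick check in Python, using the standard mile and hour conversion factors:

```python
MPH_TO_MPS = 0.44704   # 1 mile = 1609.344 m, 1 hour = 3600 s

speed_mph, distance_miles = 231.0, 0.3

speed_mps = speed_mph * MPH_TO_MPS                 # ≈ 103.3 m/s
duration_s = distance_miles / speed_mph * 3600.0   # ≈ 4.7 s; and 4.7/3 ≈ 1.57,
                                                   # i.e. ~57% longer than a 3 s gust
print(f"{speed_mps:.0f} m/s over {duration_s:.1f} s")
```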

For the 5 minute Barrow average we find the disappointing value g = 0.409, substantially below the corresponding Mt. Washington value g = 0.479. On this basis, Barrow does indeed take the gold in the gust championship, but Mt. Washington takes the title in the sustained wind championship (indeed, the one hour Mt. Washington value g = 0.510 is even more impressive)!

But this still isn’t fair: whereas Mt. Washington has been recording winds since 1874, the Barrow records only go back to 1932. How can we compensate for the disadvantage of a shorter record? For this, one needs to remove the resolution dependency not only from the speeds, but also from the probability distributions. This is done using codimensions. Converting the probabilities to codimensions (fig. 8.9a of [*Lovejoy and Schertzer*, 2013]), we find that all these records are close to what is expected: there is no clear deviation, and no clear winner at all!

**References:**

Lovejoy, S., and D. Schertzer (2013), *The Weather and Climate: Emergent Laws and Multifractal Cascades*, 496 pp., Cambridge University Press, Cambridge.

Pagluica, S. (1934), The Great Wind Of April 11-12, 1934, On Mount Washington, N.H., And Its Measurement, Part I: Winds Of Superhurricane Force, And A Heated Anemometer For Their Measurement During Ice Forming Conditions, *Mon. Wea. Review*, *62*, 186-189.

Richardson, L. F. (1926), Atmospheric diffusion shown on a distance-neighbour graph, *Proc. Roy. Soc.*, *A110*, 709-737.

