Senior Marketing Executive, Cambridge University Press

How much do you know about the influence of mathematics and statistics? April is Mathematics and Statistics Awareness Month, so we thought we would share a quick snapshot…

You probably know that secure online shopping and private messaging on your mobile or cell phone would not be possible without something called public key cryptography. But did you know it is based on a branch of mathematics called number theory? Film streaming and online gaming would be impossible without communications theory and signal processing, which employ an area of mathematics called combinatorics.

Meanwhile, the ongoing COVID-19 pandemic has made many of us sadly familiar with the statistical tool called the R number. On the same theme, an equation called Bayes’ Rule can be used to work out the accuracy of COVID test results.
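To make the Bayes' Rule point concrete, here is a minimal sketch in Python. The sensitivity, specificity and prevalence figures are illustrative assumptions, not data for any real test:

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """Bayes' Rule: P(infected | positive test)."""
    p_pos_if_infected = sensitivity
    p_pos_if_healthy = 1 - specificity
    # Total probability of a positive result
    p_pos = (p_pos_if_infected * prevalence
             + p_pos_if_healthy * (1 - prevalence))
    return p_pos_if_infected * prevalence / p_pos

# With 1% prevalence, 90% sensitivity and 95% specificity,
# only about 15% of positive results are true infections.
print(posterior_positive(0.01, 0.90, 0.95))
```

Notice that the answer depends as much on how rare the disease is as on the test itself, which is exactly why Bayes' Rule matters for interpreting test results.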

Then there is the discipline called operations research (OR) – sometimes called management science. Essentially, it's the science of making things work smoothly. It uses a combination of mathematical modelling, optimization and statistics, alongside disciplines like organization studies and psychology, to address logistical challenges such as the surprisingly complex problem of managing elevator usage.

**Solutions for real-world problems**

Mathematics is essential in answering many complex questions we find in the real world. For instance, we rely on mathematics to model the Earth's climate. Mathematics and statistics are used to study many aspects of the natural world, from the life sciences to topics like geophysics. Plus, let's not forget epidemiology, which uses statistical tools, such as the R number mentioned above, to model the spread of diseases.

**Going with the flow**

Understanding the ways fluids behave in different situations is crucial to many applications in engineering, chemistry, physics and biology. For instance, it is our understanding of fluid dynamics that lets us build planes that fly, and create hydraulic brakes that stop cars. It even helps us understand how the human heart works.

**Know when to fold ‘em**

Who would have thought that the seemingly obscure mathematics of folding in origami has applications in engineering, biochemistry (protein folding) and aeronautics (unfolding solar panels in space)? All this from an area of mathematics that might have seemed, at first, to have little value beyond academic interest.

**Machine learning turning fiction into fact**

Recently, in a spooky development that could have come straight from the Harry Potter movies, machine learning techniques (with mathematics at their heart) have made it possible to animate photographs and make them ‘come to life’ – a bit like those grumpy paintings at Hogwarts. Another application is the increasing power of online tools such as Google Translate, which uses a technique known as natural language processing to give almost instant language translation. Granted, it’s not always perfect, but less than a generation ago, this would all have seemed like science fiction.

**Economics and finance models**

Economists, businesses and financial organisations like insurance companies use mathematics and statistics to carry out data analysis, build financial models (such as for financial markets) and support decision making. One of the tools used particularly in economics is game theory, a slick mathematical method for complex decision making. It's worth noting that the 2020 Nobel prize for Economics was awarded to researchers in game theory, 26 years after John Nash was also awarded the prize for his work on game theory, as dramatized in the film *A Beautiful Mind*.

**Psychology and social science**

Statistics is essential to psychology research for a number of reasons, not least because it lets researchers assess the significance of the results obtained from experiments that often involve many participants. Without the tools of statistics it would be very difficult to see patterns in such large amounts of data. And it’s not just psychology. Every other social science, such as sociology, relies on statistics to make sense of experiments. If you are dealing with so-called big data (on the worldwide web for example) then you can also employ machine learning and pattern recognition techniques that are – you guessed it – based on mathematics and statistics.
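As an illustration of the kind of significance assessment mentioned above, here is a small sketch of a permutation test, one of many possible approaches, run on made-up scores for two hypothetical groups:

```python
import random
from statistics import mean

def permutation_pvalue(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        # Re-split the shuffled data and compare the group means
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_iter

group_a = [5.1, 5.3, 5.2, 5.4, 5.0]   # hypothetical scores
group_b = [4.1, 4.2, 4.0, 4.3, 4.1]
print(permutation_pvalue(group_a, group_b))  # a small p-value
```

A small p-value says the observed difference would rarely arise by chance alone, which is precisely the pattern-versus-noise judgement statistics lets researchers make.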

**Here, there and everywhere**

The influence of mathematics and statistics can be found almost everywhere you look, from the online translation tools of Google, to the design of airplanes, from climate models and weather prediction to solar panels on satellites and the smooth running of elevators. Mathematics and statistics are essential to the modern world, and to understanding everything in it.

**Find out more**

New to mathematics and want to learn more? Take a look at *Quantitative Reasoning*, a book that helps readers think mathematically about real-world questions.

**Related Content from Cambridge University Press**

Communications and Signal Processing

Multimedia Fluid Mechanics Online (an undergraduate teaching tool)

Origametry – the Mathematics of Paper Folding

The book sets out the mathematical content of the breakthroughs, with all of the details except those of the work based on Deligne's solution to the Weil conjectures. Those would be for a different book, maybe one on the Bombieri-Vinogradov theorem and its extensions and applications. For the expert, striving to improve the best bound of 246, most of this material will be familiar. However, the main target audience is beginning researchers, for example graduate students. I have vivid memories of my time at Columbia having to scrap with other grad students for important books held behind the library desk. One could have these for only one hour at a time, completely insufficient for understanding a major proof. To assist this group of potential readers, the appendices contain proofs of supporting mathematics such as the spectral theorem for compact operators, Weil's inequality for curves modulo primes, Bessel functions, Shiu's Brun-Titchmarsh estimate, etc. I have tried to simplify this material down to only what is essential for the work in the chapters, and the chapters have been simplified down to only what is essential for the breakthroughs. But it's certainly not simple!

Along the way there appeared to be many ways in which the results could be improved. However, I did not tarry, since having started, the worst outcome would have been for the work not to be completed. Having completed it, others, it is hoped, will find paths to take it forward, with or without the text. For this writer, there are other pressing tasks, and the Erdős lifetime limit is not so far off.

What the book is not: it is not an account of the breakthroughs as a human endeavour. That would be a different book. There is the odd comment here and there which would qualify, and some highly abbreviated biographical paragraphs. It is this author's hope that such a book will be written, and soon, before the individual and collective memory of events fades. To this end, on the book's web page there is a link to the "backstory", a web page containing an annotated series of timelines and links to sources which might inspire someone to write up the human story with an absolute minimum of mathematical detail. Because what happened, and especially the way it happened, is unique (I would say in the entire history of mathematics), an account of the human side of the developments, in the hands of someone with suitable skills and experience, would I believe be of interest to a very wide audience.

As usual, mathematical arguments are often difficult to follow and I needed help. This was generously provided, especially by Pat Gallagher, Dan Goldston, Yoichi Motohashi and Terry Tao. I was not able to obtain a reply from Yitang Zhang, in spite of repeated requests, other than to be sent his image. In the end I did not include more than a summary account of the proof of his extension of the Bombieri-Vinogradov theorem; a full report of his proof, or better that of Polymath8a, would be part of the other potential book mentioned before. In any case, Maynard, Tao and Polymath8b went so much further than Zhang with their multidivisor/multidimensional method, an approach which seems both accessible and open to improvement.

Which brings me to my final remark: where next in the bounded gaps saga? As hinted before, the structure of narrow admissible tuples, related to the structure of the multiple divisors of Maynard/Tao, together with variations of the perturbation structure of Polymath8b and of the polynomial basis used in the optimization step, could assist progress to the next target. Based on "jumping champions" results, this should be 210. But who knows!
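For readers meeting these ideas for the first time: a k-tuple H of distinct integers is called admissible if, for every prime p, the elements of H avoid at least one residue class mod p. The following sketch (mine, not from the book) checks this directly; only primes p ≤ k need testing, since k numbers cannot occupy all residue classes modulo a larger prime:

```python
def is_admissible(h):
    """True if the tuple h of distinct integers is admissible,
    i.e. for every prime p its elements miss some residue mod p."""
    k = len(h)
    for p in range(2, k + 1):
        if all(p % d for d in range(2, p)):      # p is prime
            if len({x % p for x in h}) == p:     # every residue hit
                return False
    return True

print(is_admissible((0, 2)))      # True: the twin prime pattern
print(is_admissible((0, 2, 4)))   # False: covers all residues mod 3
```

Admissibility is the obstruction that rules out patterns like (0, 2, 4): one of n, n+2, n+4 is always divisible by 3, so the pattern cannot occur infinitely often in the primes.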

Everyone knows that *The Principia* was based on the inspiration that struck Newton when the apple struck his head, as you can see from the cartoon above. The thought that passed through his head was as follows:

“Clearly the earth attracts the apple in the same way that it attracts the moon, and the force very likely obeys the inverse square law. I can check this by calculating the acceleration of the moon towards the earth, as determined by its orbit and the length of a sidereal month.

“But the moon does not orbit about the centre of the earth but about the barycentre of the earth-moon system. To calculate this I need to know the mass of the moon. What difference would it make if the moon became twice as dense? The tides would become stronger. I can compare the strength of the lunar and solar tides, and hence compare the density of the moon with the density of the sun, and I can compare the density of the sun with the density of the earth as they both have satellites. Now I need pencil and paper…”
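The apple/moon check described above can be carried out in a few lines. The sketch below uses modern round-number values (my assumptions, obviously not Newton's own figures): if gravity falls off as the inverse square of distance, surface gravity scaled down by the square of the distance ratio should match the moon's centripetal acceleration.

```python
import math

g = 9.81             # surface gravity, m/s^2
R_earth = 6.371e6    # radius of the earth, m
r_moon = 3.844e8     # mean earth-moon distance, m
T = 27.32 * 86400    # sidereal month (orbital period), s

# Inverse square law: dilute g by (R_earth / r_moon)^2
predicted = g * (R_earth / r_moon) ** 2

# Centripetal acceleration of the moon's (nearly circular) orbit
observed = 4 * math.pi ** 2 * r_moon / T ** 2

print(predicted, observed)  # both about 2.7e-3 m/s^2
```

The two numbers agree to about one per cent, which is essentially the comparison the author describes Newton making in his garden.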

Newton’s *anni mirabiles* were 1665-1667, when he was twenty-two to twenty-four years old. In 1665 the University of Cambridge went into lockdown because of the plague, and he retreated to his home base in Lincolnshire to think. This is when his theory of gravity, and so much more, was developed, and the semi-mythical apple fell from the tree.

The current Covid crisis has killed more people than the Great Plague of 1665-6, which none the less is thought to have killed a quarter of the population of London. The current lockdown leaves academics with vast electronic resources, whereas in Newton’s day there was nothing to do but think. Is there a lesson to be learned?

He was an outstanding mathematician, physicist, astronomer, historian, theologian, and (as master of the mint) civil servant. But Voltaire, who thought so highly of him, and was the lover of the great Émilie du Châtelet, the only translator of *The Principia* into French, asserted that he was so famous in England because he had a beautiful niece. Princes of Italy came to England in order to set eyes on him, and, for all I know, on his niece.

Reverting to the apple/moon comparison, a great variety of simple ideas come into play, and I intend to concentrate on simple ideas, rather than on the technical details. For now, I ask you some simple questions concerning the tides.

- The sun attracts the earth far more strongly than does the moon. The earth rotates about the sun, not about the moon. So why does the moon cause greater tides on the earth than does the sun?
- As the earth spins on its axis, the moon reaches its highest point in the sky, when it attracts the sea most strongly, approximately once every 24 hours. But we get a high tide approximately once every 12 hours. Why is this?
- Why only approximately every 12 hours?
- Some high tides are higher than others. What other factors may contribute to these discrepancies? Ignoring the weather, you are doing well if you can think of five.

If you are new to these ideas you are in much the same position as Newton, who (it seems) never saw the sea, but sat in his garden thinking.

I should perhaps mention that I am an emeritus professor of pure mathematics at Queen Mary, University of London. I work in algebra, and have had to learn much in order to understand the great breadth of Newton’s masterpiece.

I hope that my understanding has been sufficient for the purposes of the task I have undertaken. I was moved to produce my translation by my feeling that the Cohen-Whitman translation was too opaque, and based on an inadequate understanding of the text. I detail my attitude to their work in the preface to my translation, and acknowledge the help I have received from many people.

I should repeat here my gratitude to Carl Murray, who persuaded me to have my translation published and produced the diagrams, to Wolfram Neutsch, who read the entire manuscript, and saved me from some embarrassing errors, to Niccolò Guicciardini for much learned assistance, and to David Tranah of Cambridge University Press, who fortunately insisted on setting the translation in a bright modern style. I am grateful for his hard work and professionalism which also saved me from a number of errors.

The online annotated translation of *The Principia* (www.17centurymaths.com) by Ian Bruce unfortunately did not come to my attention until my translation was in the hands of C.U.P.

There are cynical reasons for the perpetual arguments—someone benefits. But cynicism aside, is there something we can do to improve public discussion and make positive progress?

In my engineering classes, I spend quite a bit of time encouraging students to observe and note down the process of learning. There are things to learn in class, of course, but it would be a shame not to use the opportunity of learning something new to teach ourselves about *how* one learns something new. What is that process? Well, we confront a new situation, we ask questions, we identify solutions or pathways to solution, and we pick something to try. Then, we see if it works. If it works, we win! If it does not work, we review how we got to the non-working solution, and we try something else. We are only finished when we find an answer that works.

How does this anecdote help us to address the problem of stalled public discourse?

Let’s compare public discourse to the STEM classroom. We start with a topic, which is usually something wrong in society that someone asserts needs to be addressed or changed. Let’s use climate change as an example. According to my description of the classroom process, once the topic has been raised, the next step is to ask questions and identify solutions and pathways to solution. That works for public discussion of climate change. The next step is to pick something to try. This seems to be the moment when public discourse breaks down—we can’t agree on what to try.

Why does public discussion break down at this step while the scientific-engineering process moves forward? Well, it would be revisionist history to say that scientific discovery progresses smoothly. Science is performed by people, after all, and the same conflicts that beset public discussions rankle in science too. Germ theory, espoused by Louis Pasteur, was roundly mocked when presented to the great minds of the 19th century.

What makes the difference, then? Well, in the case of germ theory, it made a difference that Pasteur was right. Germs do cause disease, and the hygiene protections that the theory suggests do work to reduce disease spread. So, the difference is that there can be hard measurements made and correctly interpreted to lead us to positive progress.

Thus, I arrive at my suggestion for the public discourse problem—hard measurements correctly interpreted. And perhaps a bit of patience. It took years for the germ theory to be accepted. These years were spent taking data, interpreting them or improving them, and seeing what it all added up to.

It may be a bit tough to be patient, given what’s at stake with some of the questions we are debating in the public sphere. One aspect that may help to accelerate the process of getting it right is to encourage widespread familiarity with the nature of measurements and with the uncertainty that is unavoidable in hard measurements. The more frankly we interrogate our data, assigning appropriate error limits, the more we can learn from them. We should not imagine that they are more definitive than they are—we may miss or misstate the truths that they contain. Widespread literacy on the topic of uncertainty in measurements would go a long way in improving public discourse on the topics that divide us.
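As a concrete illustration of "assigning appropriate error limits", here is a minimal sketch (with made-up readings) that reports a mean together with an approximate 95% confidence interval rather than as a bare number:

```python
import math
from statistics import mean, stdev

def mean_with_error(data, z=1.96):
    """Sample mean with an approximate 95% confidence interval."""
    m = mean(data)
    se = stdev(data) / math.sqrt(len(data))   # standard error
    return m, m - z * se, m + z * se

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]  # hypothetical data
m, lo, hi = mean_with_error(readings)
print(f"{m:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Reporting the interval alongside the mean is exactly the habit of frank interrogation argued for above: it tells the reader how definitive the measurement actually is.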

And there will be more examples of algorithms migrating to the inside of the body, resulting in unique legal issues. Just consider what is happening now in the development of neuroprostheses, such as an artificial hippocampus, the brain structure where memory resides. Clearly, challenging legal issues will arise from such developments. For example, will a person’s memories be susceptible to editing?

And will it be possible to transmit information (such as a commercial or political ad) directly into one’s brain via a wireless connection? This may sound like science fiction, but there seems to be a trend for technology equipped with algorithms to enter the body and to be directed at the brain; in fact, a lot of research is underway to do just that. And of course, the current law will be challenged by such developments. All these examples call for what I term a Law of Algorithms, which I have been writing about recently. What I have tried to express in this brief essay is that technology and law do not operate independently, and in the case of algorithms, the law in many different areas will be challenged. However, by addressing concerns associated with algorithms from the perspective of law, we may be able to create a more just and equitable society, a laudable goal.

Disease surveillance can be conducted at a variety of levels, from tracking and tracing individual diagnosed cases to monitoring aggregate testing or case data as a measure of disease incidence in a specific or more general population. The former is a medical and epidemiological problem, while the latter is both a statistical problem, in the sense of using samples of data taken over time to infer trends in a population, and an epidemiological problem if a potential increase is identified.

At its most fundamental, this is an exercise in separating signal from noise, where the goal is to quickly identify an increase in disease incidence (the signal) in the presence of data that naturally fluctuate over time (the noise). The extremes will generally be clear: an individual who presents with obvious COVID-19 symptoms and tests positive, or a large outbreak with many individuals with symptoms and/or positive test results. Identifying more subtle changes in disease incidence, before they develop into a large outbreak, is more challenging.

The detection challenge is partly quantitative, since it can be quite difficult to identify a subtle signal amid noisy data. But the challenge is compounded by practical issues related to COVID-19, a disease that can spread asymptomatically (or nearly so) and for which case outcomes may lag policy changes by weeks. It may be further compounded by differing organizational priorities, where the existence of subtle changes may be disputed and yet where it is critical to quickly identify increases in disease incidence before they become exponential.

Our focus here is on the statistical tools that address the quantitative problem, though note that proper implementation and transparent use of the right tool may also help address some of the other challenges. The fundamental idea is quite simple. Using historical data, the existing or desired disease incidence rate is quantitatively characterized, typically in terms of an average rate and some measure of variability such as the standard deviation. Then future data are monitored against this historical baseline; if an observation is significantly above the historical average, that triggers an epidemiological investigation into whether there is an event that warrants some sort of intervention and/or policy change.

The figure below illustrates one approach using COVID-19 case data from Knox County, Tennessee. It uses a moving average and standard deviation from the previous 14 days, less any unusual spikes in the data, to establish a warning threshold (the yellow line) and signal threshold (the red line) at 1.5 and 3 standard deviations above the moving average. In the figure, the yellow and red bars denote counts that have exceeded the warning threshold or the signal threshold, respectively, which is an indication that the count for that day was unusually high compared to observations from the previous two weeks.

The way this particular surveillance algorithm is implemented is that each day the 14-day moving average and standard deviation are calculated, from which new warning and signal thresholds are specified. Then the next day’s case count is compared to these thresholds, and appropriate action is taken if the observed count exceeds one or both of them. For example, at the far right of the figure we see that the thresholds for May 24th have been calculated using the data from May 10-23, less the spike on May 11th. The question is where the observed count, when it is observed, falls in relation to that day’s thresholds. (See Illustrative Surveillance Example Using Knox County TN COVID-19 Case Data.xlsx for the Excel spreadsheet with the calculations.)
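The daily update described above is easy to sketch in code. The following is a simplified illustration of the idea, not the actual implementation used for the Knox County data, and the spike-exclusion rule here (dropping counts above three times the window median) is a crude stand-in for whatever rule was used in practice:

```python
from statistics import mean, median, stdev

def thresholds(history, k_warn=1.5, k_signal=3.0):
    """Warning and signal thresholds from a 14-day window,
    less any unusual spikes (here: > 3x the window median)."""
    window = [x for x in history if x <= 3 * max(median(history), 1)]
    m = mean(window)
    sd = stdev(window) if len(window) > 1 else 0.0
    return m + k_warn * sd, m + k_signal * sd

def classify(count, history):
    """Compare today's count to thresholds from the prior 14 days."""
    warn, signal = thresholds(history)
    if count > signal:
        return "signal"
    if count > warn:
        return "warning"
    return "normal"

last_14_days = [8, 10, 12, 9, 11, 10, 10, 9, 11, 10, 10, 9, 11, 10]
print(classify(12, last_14_days))  # "warning"
print(classify(20, last_14_days))  # "signal"
```

Each new day the window slides forward one day, so the thresholds track the background incidence just as in the figure.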

Note how the figure shows the background incidence fluctuating over time, and that this is reflected in changes to the thresholds. Because this algorithm bases the decision thresholds on a 14-day average, it allows for this type of variation, though depending on the surveillance goals this may or may not be desired. For example, this algorithm will be very effective at detecting large increases quickly, but it will be poor at detecting a slow, steady increase in disease incidence. For that, there are other algorithms that are more effective.

Now, while the idea is simple, algorithmic options and implementation details add complexity. For example, the choice of threshold requires making a sensitivity-specificity tradeoff in terms of speed of detection of an increase in incidence versus the rate of false positive signals, a choice that has both practical and perhaps political implications. There are also algorithmic options, where certain algorithms are more appropriate for some types of data and, as we just mentioned, some algorithms are better suited to detect certain types of incidence changes than others. And, there are computational details as well as basic questions about which historical data to use to characterize the “normal” background disease incidence and how to update that information as time progresses.

Unfortunately, we don’t have the space here to address all of these issues. However, important implementation details aside, hopefully this short note has made it clear that the appropriate use of these types of surveillance tools can help frame the process by which decisions are made and help remove subjectivity and seat-of-the-pants decision making. Perhaps most importantly, from a public health viewpoint, appropriate implementation of a good surveillance system can improve data and decision making transparency and thereby increase confidence in the public health system.

**For Further Reading**

Fricker, R.D., Jr. (2013). *Introduction to Statistical Methods for Biosurveillance, with an Emphasis on Syndromic Surveillance*. Cambridge University Press.

Fricker, R.D., Jr., and S.E. Rigdon (2018). Disease Surveillance: Detecting and Tracking Outbreaks Using Statistics. *Chance*, **31**, 12-22.

Rigdon, S.E., and R.D. Fricker, Jr. (2019). *Monitoring the Health of Populations by Tracking Disease Outbreaks and Epidemics: Saving Humanity from the Next Plague*. CRC Press.

Why do we blame algorithms for our woes? Because they push us out of our comfort zone? No doubt. But also because we often agree to use them, not understanding what they really are and how they work. Our dreams and our fears are the consequences of this ignorance. We fear algorithms because we see them as mysterious beings, endowed with supernatural powers, perhaps evil intentions.

In the book, we clarify the opaque vocabulary often used in this context, explaining the basics of this science for a general public. To free yourselves from any magical thinking, to separate legitimate hopes from childish fantasies and justified fears from unfounded anxieties, we invite you on a journey through the world of algorithms. We discuss the digital society and the new human in the making, illuminating societal and philosophical issues, such as the transformation of work, property and privacy, that are often explained confusingly in the media.

We explain how scientific knowledge is being transformed by computer science across all fields, with big data, machine learning and more. It is essential to become familiar with these notions to better understand the transformations of the world and acquire a more modern viewpoint.

The goal of *The Age of Algorithms* is to make you more aware of the environments you live in, to empower you with your own viewpoint and understanding instead of being frightened by the new technology. This, we believe, will help you improve your life in the digital world.

Algorithms can lead to the best or the worst outcome, but we must never forget that they do not, in themselves, have intention. Human beings have designed them. They are what we want them to be. That is also the message of the book.


**Figure 1: (a) Mixing of milk and coffee; (b) Cascading process in turbulence; (c) Energy flux Π_{u}(k)**

Let us understand the turbulent mixing of milk in coffee in some detail. Stirring with a spoon creates a mini cyclone, technically called an *eddy*, of the size of the cup (L). This large eddy generates smaller eddies of size L/2, which in turn generate even smaller ones of size L/4, and so on (Figure 1(b)). This process continues down to the smallest eddies, of size η, where the fluid energy is converted to heat.

It is convenient and customary to compute the energy transfer from one scale to the next in Fourier space. The wavenumbers are denoted by k, which is the inverse of length, and the energy flux by Π_{u}(k). In the coffee example, since there is no energy injection in the intermediate range, the same energy flux flows down the inertial range, where viscous dissipation is weak. Hence, Π_{u}(k) = const., or dΠ_{u}(k)/dk = 0 (see Figure 1(b,c)). The corresponding energy spectrum is proportional to k^{-5/3}. This is Kolmogorov’s theory of turbulence.
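The k^{-5/3} spectrum follows from a one-line dimensional argument: in the inertial range the energy spectrum E(k) can depend only on the constant flux (equal to the dissipation rate ε) and on k.

```latex
% Dimensions: [E(k)] = L^3 T^{-2}, [\varepsilon] = L^2 T^{-3}, [k] = L^{-1}.
% Posit E(k) = C_K \varepsilon^a k^b and match powers of L and T:
%   T: -3a = -2      =>  a = 2/3
%   L: 2a - b = 3    =>  b = -5/3
E(k) = C_K \, \varepsilon^{2/3} k^{-5/3}
```

Here C_K is the (dimensionless) Kolmogorov constant, fixed by experiment rather than by the dimensional argument.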

Now imagine a strange experiment in which we mix polymers into coffee (not for drinking!). Polymers are like small springs, and they extract a fraction of the kinetic energy flux, Π_{u}(k). Hence Π_{u}(k) decreases with k, or dΠ_{u}(k)/dk < 0. On the other hand, if we heat up the coffee, then the thermal energy enhances Π_{u}(k), leading to dΠ_{u}(k)/dk > 0. We illustrate these two cases in Figure 1(c). The above examples are illustrations of real-life flows; for example, dΠ_{u}(k)/dk < 0 in magnetohydrodynamic turbulence and in flows with polymers, but dΠ_{u}(k)/dk > 0 in thermal convection and in shear flows. Thus, the energy flux helps model these flows, as well as providing important inputs, e.g., for turbulent drag reduction in polymeric flows and in magnetohydrodynamics.

In the monograph I describe the basics of energy transfers and fluxes in turbulence, and then employ them to describe scaling and other properties of turbulence in hydrodynamics, magnetohydrodynamics, passive scalars, buoyant flows, rotating flows, active scalars and vectors, compressible flows, etc. Many of the above ideas were discovered by our group; hence the book brings a unique perspective to these topics.

Cambridge University Press has produced an attractive book that, I hope, will be very useful to graduate students, scholars, and researchers.

**About the Author:**

**Mahendra K. Verma**, a leading researcher in the field of turbulence, holds the Sanjay Mittal Chair in the Physics Department of the Indian Institute of Technology Kanpur, India. He is a recipient of the Swarnajayanti Fellowship, the INSA Teachers Award, and the Dr APJ Abdul Kalam Cray HPC Award, and is a fellow of INSA. In addition to this book, he has authored the books *Introduction to Mechanics* and *Physics of Buoyant Flows: From Instabilities to Turbulence*. His other research interests include nonlinear dynamics, high-performance computing, and non-equilibrium statistical physics.


The teachers of these new students taught two subjects. One was the classical geometry of the Greeks, less useful than it might appear, but universally held up as a model of reasoning in which rigorous argument led from clearly stated first principles to final conclusions. The other was modern mathematics, in particular calculus, which lacked any such clear structure.

Certainly modern mathematics dealt with numbers, but the numbers seemed to be drawn from a rag bag of different objects. There were numbers like 1 and 2 which everybody understood, and fractions like 1/3 or 3/9, which were the same number, and 2/3 and 1/4, which were different numbers. Then there was 0, which was a number like every other number except that you were not allowed to divide by it. To these were adjoined the negative numbers, which were every bit like positive numbers, provided that you remembered that the product of two negative numbers was a positive number. Negative numbers had no square root unless you allowed mysterious entities called complex numbers, which were also exactly like other numbers except when they were not (for example, a nonzero complex number has three complex cube roots). Finally there were objects like π which were not fractions, or like Euler’s γ which may or may not be a fraction, but which everybody agreed were bona fide numbers to be treated exactly the same as other numbers.
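The cube-roots remark is easy to check for yourself. Here is a short sketch (mine, purely illustrative) computing the three cube roots of any nonzero complex number:

```python
import cmath

def cube_roots(z):
    """The three cube roots of a nonzero complex number z."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / 3), (theta + 2 * cmath.pi * k) / 3)
            for k in range(3)]

# Even the real number -8 has three cube roots once we allow
# complex numbers: -2, and 1 +/- sqrt(3) i.
for w in cube_roots(-8):
    print(w, w ** 3)
```

Each root has the same modulus, r^{1/3}, and the three arguments differ by a third of a full turn, which is why the roots sit at the vertices of an equilateral triangle in the complex plane.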

It is doubtful if this worried many students then, who, like most students now, wished just to pass their exams, get a good job and enjoy themselves. However, it did worry some of their professors, and during the course of the 19th century, with many fits and starts, they completed the difficult task of rigorising calculus and the linked task of providing a coherent account of the numbers used in calculus.

In 1930, Landau published a little book, *Foundations of Analysis*, setting out this account at undergraduate level. Landau’s much loved text is still in print but, as Landau says, is written ‘in merciless telegraph style … as befits such easy material’.

There is, I think, room for a more relaxed account which gives some idea of where the ideas come from and why they are used in the way they are used. My book is an attempt at such an account.

Today’s students in Earth and environmental sciences face a transition with the increased use of new techniques and computer models to analyze, synthesize, and understand large spatial and temporal data sets. Using these models and techniques effectively requires at least a passing acquaintance with the mathematical concepts and methods that underlie them. However, many students are either intimidated by the subject or have not used the mathematics they know for many years, and so find themselves in need of a gentle reminder.

I wanted to write a textbook that presents an unintimidating introduction to the basic mathematical techniques that students will likely encounter. The material in the book is based on three courses that I teach at the undergraduate and graduate levels, and which cover quantitative methods and basic oceanographic and climate modeling. These courses are designed with many opportunities for students to develop and test their understanding by working through problems. These vary from those that fill in the steps of a worked example to more complex problems requiring the students to formulate equations and develop an appropriate path to their solution. Although I teach mostly students from marine sciences, students from other disciplines such as atmospheric sciences, ecology, geology, and even economics have taken these courses. This has led me to develop examples and assignment problems that cover a range of disciplines in the Earth and environmental sciences, and many of these have found their way into the book.

A computer is now an essential scientific tool. Many mathematical methods have been implemented in a wide variety of programming languages, and a series of Matlab and Python codes illustrating some of them has been written to supplement the material in the book.

I hope that this book gives students and researchers the mathematical tools they need to better understand complicated data analysis techniques and the inner workings of computer models.

**Mathematical Methods in the Earth and Environmental Sciences**

by Adrian Burd, *University of Georgia*
