The post The mean side of the force: How regression to the mean can fool us first appeared on Fifteen Eighty Four | Cambridge University Press.

Regression to the mean is a powerful and common source of bias in interpreting data. Once understood, its potential to mislead is obvious. Yet many scientists are regularly fooled by it. In this blog I shall try to explain it.

Figure 1 is a so-called *scatterplot* presenting data from a cross-over trial in asthma that I helped design many years ago and shows values of forced expiratory volume in one second (FEV_{1}) measured on two occasions. A cross-over trial is one in which patients are given different treatments on separate occasions. However, here the measurements represented were taken on each of the two occasions *before* the treatments were administered: they are so-called *baseline* measurements. The units of measurement are litres (L) and FEV_{1} is a measure of lung function, with higher values indicating better lung function. Since, as already explained, on both occasions the measurements were taken before administration of treatment, they may be taken to represent the natural untreated state of the patients studied. Each blue circle represents one of 150 patients and plots FEV_{1} on the second occasion (the vertical dimension) against FEV_{1} on the first occasion (the horizontal dimension). The red square represents the position of the ‘average’ patient. Two dashed lines are plotted at 1.5 L, one vertical and one horizontal. These indicate the boundary between extremely poor values (less than 1.5 L) and other (less poor) values for period 1 and period 2.

A solid diagonal line rising from bottom left to top right is the line of equality. If a blue dot lies on this line, the patient had the same reading on the second occasion as on the first. If the blue dot lies to the left and above this line, the reading was higher on the second occasion than on the first, and if a dot lies to the right and below the line the reading was lower on the second occasion. Note that the red square lies almost exactly on the line, indicating that on average patients were no better on the second occasion than on the first; this merely reflects the fact that the blue dots are scattered on either side of the line of equality, with no obvious tendency to lie on one side or the other. Of course, there is a general pattern to the observations that reflects what statisticians call a *positive correlation*. If a patient’s values were high in the first period, they will tend to be high in the second period, but the relationship is far from perfect. In fact, the points tend to cluster around the line of equality, reflecting not only that there is correlation but also that the distribution is similar in the second period to the first.

As regards trends over time, however, the message seems clear: there is no obvious systematic (as opposed to purely random) difference for values in the second period compared to the first. *However, if we select particular values for examination, we must be very careful not to fool ourselves*, as I shall now explain.

Figure 2 represents what we would see if we only decided to retain for treatment on a second occasion those patients whose first period values were extremely poor, that is to say less than 1.5 L. Of course, all 150 patients would have been measured on the first occasion but as it turns out, only 43 patients had values less than 1.5 L and so it is only for these patients that we would have values from the second period. The 43 pairs of values that would result are shown in the scatterplot.

The remaining unselected patients, 107 in number (taking 43 from 150), are represented by unfilled circles. Since for these patients the second period values are not available, the first period values have been plotted against the horizontal axis. Also shown, represented by a red square, is the mean value for all 150 patients on both occasions. Of course, we do not have the mean for all 150 on the second occasion, but as we saw from Figure 1, the mean of all 150 on the first occasion provides a very good estimate of the mean on the second.

If we now look at Figure 2, we can see the following. All 43 patients were classed as having FEV_{1} values that were extremely poor in the first period. However, six of them now have values that are no longer extremely poor, being equal to or above 1.5L. There were, however, no patients who moved from having values equal to or above 1.5L to having values below. Thus, taken as a whole, there seems to be some improvement over time.

A little thought, however, shows that the reason that this is so is that we collected the data in a way that made deterioration in this manner impossible. All patients with values above 1.5 L were excluded by design. Had we had a time machine, we could have decided to use the values that patients would have on the second occasion to retain only those whose second occasion values were extremely poor. In that case we would have seen the situation in Figure 3. This shows the reverse position to Figure 2. The patients who were only measured once, but this time in period 2 and not in period 1 (thanks to our time machine!), are again represented using open circles but now plotted against the vertical axis. There are now 8 points (two are difficult to count separately because they are almost identical) where patients had second period values that were below 1.5 L but whose first period values were not. We now have 8 patients who moved from having not so poor values to having extremely poor ones and none who moved the other way. Again, however, this fact is of no scientific relevance. It merely reflects the way the data have been cut.

Regression to the mean is a powerful potential source of bias. If units are selected for further study because their values were extreme when first measured, on average they will be less extreme when measured again. They may be expected to lie somewhere between the original values and the overall mean of all (selected and unselected) values. They have thus *regressed to the mean*. We typically do require that patients who enter a clinical trial have extreme values. For example, when recruiting patients for a clinical trial in hypertension, we might require that they demonstrate a measured diastolic blood pressure (DBP) of at least 95 mmHg and reject those with lower values. If that is so, we can expect to see a spontaneous improvement thanks to regression to the mean.
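The phenomenon is easy to demonstrate for oneself. The sketch below (plain Python, with illustrative numbers of my own choosing, not the trial's data) simulates pairs of noisy measurements of stable underlying patient values, selects the patients whose first measurement was extremely low, and shows that on remeasurement their average moves back towards the overall mean:

```python
import random

random.seed(42)

# Each patient has a stable "true" value; each measurement adds independent noise.
N = 10_000
true_vals = [random.gauss(0, 1) for _ in range(N)]
first = [t + random.gauss(0, 1) for t in true_vals]
second = [t + random.gauss(0, 1) for t in true_vals]

# Select only patients whose *first* measurement was extremely low.
selected = [i for i in range(N) if first[i] < -1.5]

mean_first = sum(first[i] for i in selected) / len(selected)
mean_second = sum(second[i] for i in selected) / len(selected)

print(f"mean at selection:      {mean_first:.2f}")   # necessarily below -1.5
print(f"mean on remeasurement:  {mean_second:.2f}")  # closer to the overall mean of 0
```

The selected group's mean moves roughly halfway back towards the overall mean here because, in this made-up setup, the measurement noise and the between-patient variation were given equal variances.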

How do we deal with this? By having a control group. If we select patients for a clinical trial based on their baseline reading but then randomly allocate them either to receive the intervention or a control treatment, then the regression to the mean effect should (apart from chance) be similar in the two groups. However, to eliminate the regression to the mean bias does require that we judge the effect of treatment by comparing the results at the end of the trial between the two groups and not by comparing the end result to the baseline.
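The logic can be checked with a small simulation (plain Python; the blood-pressure figures and the treatment effect of 8 mmHg are invented for illustration): both randomized arms regress to the mean equally, so the between-group comparison at follow-up recovers the true effect, while change-from-baseline in the treated arm overstates it.

```python
import random
from statistics import fmean

random.seed(1)
TRUE_EFFECT = 8.0  # invented treatment effect, in mmHg

# Each patient has a stable underlying diastolic blood pressure (DBP);
# every individual reading adds independent measurement noise.
N = 20_000
true_dbp = [random.gauss(100, 10) for _ in range(N)]
baseline = [t + random.gauss(0, 10) for t in true_dbp]

# Recruit only patients whose baseline reading is high...
recruited = [i for i in range(N) if baseline[i] > 115]
# ...then allocate alternately (standing in for random allocation here).
treated = recruited[0::2]
control = recruited[1::2]

# Follow-up readings: regression to the mean affects both arms alike;
# only the treated arm receives the (invented) treatment effect.
followup = {i: true_dbp[i] + random.gauss(0, 10) for i in recruited}
for i in treated:
    followup[i] -= TRUE_EFFECT

between = fmean(followup[i] for i in control) - fmean(followup[i] for i in treated)
within = fmean(baseline[i] for i in treated) - fmean(followup[i] for i in treated)

print(f"between-group estimate of effect: {between:.1f}")  # near the true 8
print(f"change-from-baseline estimate:    {within:.1f}")   # well above 8
```

The change-from-baseline figure bundles the genuine effect together with the spontaneous regression to the mean, which is exactly the bias the between-group comparison removes.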

The discovery of regression to the mean as a statistical phenomenon was due to the Victorian scientist Francis Galton (1822-1911). The story of how he came across it is treated in my book *Dicing with Death*.

Title: *Dicing with Death*

Author: Stephen Senn

ISBN: 9781108999861


The post Programming in Parallel with CUDA first appeared on Fifteen Eighty Four | Cambridge University Press.

NVIDIA’s launch of CUDA for GPU programming in 2007 broke Moore’s Law in the sense that computing power suddenly jumped by a factor of 100, not 2. Moreover, the C++ programming language was used to write the GPU code. Thus, there was really very little new to learn!

My book is designed for people who want to get more processing power out of their PC, workstation or large-scale computing system by using the power of modern GPUs. It will teach you how to use NVIDIA’s C/C++ CUDA programming language to get the most out of your GPUs.

My book is by no means the first book on CUDA, but it is different – it has a much richer set of real-world examples than previous books. Also, I care deeply about style in computer code. Good code should be compact and clear, avoiding “clever tricks” to optimise fragments of code. One revelation in developing the examples was just how good the CUDA NVCC compiler is at optimising code. We think our examples are elegant, efficient and uncluttered. For example, we use modern C++ container classes to hold arrays rather than relying on the older C-style malloc-based allocation still seen in most current CUDA tutorial code. We also use a small set of utilities provided in “cx” header files.

Many of our examples are fully working programs and cover topics including

• Iterative solution of partial differential equations,

• Image manipulation including Richardson-Lucy image deblurring,

• Medical image registration,

• Simulation of a medical PET scanner,

• Reconstruction of PET images from listmode data using the MLEM method,

• The Ising model in solid state Physics, with a simple interactive visualisation,

• Use of CUDA textures, including our cx container class which simplifies their creation,

• Use of Tensor Core hardware.

While this code may be directly useful in some cases, its main purpose is to provide models which readers can rapidly modify to solve their own problems.

We also include many other short examples to illustrate various features of CUDA and other parallel programming tools such as OpenMP and MPI.

I do not have space here to tell you everything about the book but you will be able to find all the examples at https://github.com/RichardAns/CUDA-Programs. I hope you enjoy my book.


The post Geomathematics first appeared on Fifteen Eighty Four | Cambridge University Press.

Newton found something essential: the Earth attracts the apple (so far not very surprising), but the apple also attracts the Earth (admittedly far less). The fact that the apple pulls quite weakly is due to another discovery by Newton: the gravitational force is proportional to the attracting mass. It is worth thinking that through further. If glaciers melt, then the corresponding part of the Earth, say Greenland, loses some of its mass. In comparison to the mass of the entire island and the segment of mantle and core underneath, it is a very small percentage loss. But there is a loss, and it must have, according to Sir Isaac, a consequence for the gravitational field. And, indeed, current technology, such as that provided by the satellite missions GRACE and GRACE-FO, is capable of measuring deviations of the gravitational field over Greenland and elsewhere.

This is quite impressive, but it is not the end of the story, because we are heading in the wrong direction. We say: there is a change of mass, so there must be a change of the gravitational field and Newton gives us a formula to calculate the latter change. However, we are actually putting the cart before the horse here. The satellites yield information about the gravitational field at the orbit, and we want to calculate mass anomalies at the Earth’s surface out of that.

A successor on Newton’s Lucasian chair, Sir George Gabriel Stokes, noticed that the opposite question that arises – the so-called inverse problem – is by far more difficult to answer and does not have a unique solution. Science has progressed, and today we know, for example, that mass changes which occur only at the surface (such as melting glaciers) can be uniquely calculated from the gravitational field. However, if there is (as usual) noise on the data, we will most likely compute a result which is seriously at odds with reality. Only sophisticated tools – so-called regularization methods – are able to avoid this.
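A deliberately tiny toy problem (my own simplified illustration, not the actual satellite-gravimetry mathematics) shows the danger: for a nearly singular system, a minute perturbation of the data throws the directly inverted solution far from the truth, whereas a Tikhonov-regularized solution stays close.

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system via Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det)

# A nearly singular "forward model" A; exact data for the true answer x = (1, 1)
# would be b = (2.0, 2.0001).
a11, a12, a21, a22 = 1.0, 1.0, 1.0, 1.0001
x_true = (1.0, 1.0)

# Data perturbed by a tiny "measurement error" in the second entry.
b1, b2 = 2.0, 2.0002

# Naive inversion amplifies the tiny error enormously.
x_naive = solve2(a11, a12, a21, a22, b1, b2)

# Tikhonov regularization: solve (A^T A + lam * I) x = A^T b instead.
lam = 0.01
m11 = a11 * a11 + a21 * a21 + lam
m12 = a11 * a12 + a21 * a22
m22 = a12 * a12 + a22 * a22 + lam
c1 = a11 * b1 + a21 * b2
c2 = a12 * b1 + a22 * b2
x_reg = solve2(m11, m12, m12, m22, c1, c2)

def error(x):
    """Distance from the true solution."""
    return ((x[0] - x_true[0]) ** 2 + (x[1] - x_true[1]) ** 2) ** 0.5

print("naive error:      ", error(x_naive))  # order 1
print("regularized error:", error(x_reg))    # order 0.01
```

The regularization term trades a small, controlled bias for stability, which is the essential compromise behind the sophisticated methods mentioned above.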

I use the aforementioned problem as an example where incredible measurement technology always needs a very important companion: sophisticated mathematical methods for the evaluation of the data. In geosciences, there is a long tradition of understanding mathematics as a ubiquitous part of research, going back as far as Ancient Greece. The scientific field where Earth sciences and mathematics merge is nowadays called *geomathematics*.

My book “Geomathematics – Modelling and Solving Mathematical Problems in Geodesy and Geophysics” provides mathematical foundations and tools for different applications in Earth sciences. It particularly focusses on global and regional modelling regarding gravitation, geomagnetics, and seismology. Though different mathematical theories and numerical methods are discussed for the three applications, there are many interconnections between the various problems.

The purpose of the book is, therefore, to provide a reference work for numerous mathematical theories which are fundamental in gravitational and magnetic field modelling, as well as seismology. It also presents some new tools, e.g. for best-basis selection. Eventually, my hope is that it encourages more interdisciplinary research between mathematicians and Earth scientists.


The post Medicine and statistics – not Montagues and Capulets first appeared on Fifteen Eighty Four | Cambridge University Press.

The disquiet about statistics in medicine is understandable. Most of us physicians did not have a loving relationship with mathematics and many of us probably despise numbers, formulas and equations. However much we dislike the intrusion of statistics into medicine, we need to have some understanding of it to make sense of medical research. An appreciation of medical statistics has become even more relevant of late, as we find more and more studies that employ large databases, the interpretation of which requires an increasingly complex array of statistical tests.

Medicine is not an exact science and we largely depend on the balance of probabilities to make diagnosis and treatment decisions. We don’t always make the correct decisions. Some treatments work, others don’t and a few may even be downright dangerous. The only way to find out what works and what does not is to put them to the test. But there are so many factors, some known, some unknown, that can come into play in real life and affect the results of a test that it may be difficult to differentiate the apparent from the real. Therefore, we need to know whether an apparent difference in effect is genuine or observed purely by chance.

Simply knowing a treatment works is not enough. If we decide that the treatment genuinely works, we would also be interested to know how effective the treatment is, or what the size of the treatment effect is. If we agree that the treatment effect is sizable, we should also ensure that the treatment makes a meaningful difference in the outcomes that matter and does not simply make the numbers look good. For example, it is not enough to see a reduction in systolic blood pressure or serum cholesterol; we need to see a reduction in cardiovascular mortality and morbidity. The utility of statistics in medicine is that it allows us to make sense of the seemingly random numbers that are generated from research and reach those conclusions.

Admittedly, we do need complex statistical equations to help make up our minds. Does that mean we all need to learn these formulas? Some physicians will happily put on the hat of researchers but for most of us practising ones that may be a challenge too far. How do we then master the web of medical statistics? We don’t.

The widespread availability of statistical software has given rise to the temptation to grab hold of the software, open the spreadsheet, press a button and hope for some magical answers. This is just as risky as taking hold of the steering wheel without knowing the car or the route. It could get really messy! I do not think everyone needs to learn statistical formulas or be able to perform the tests. It is enough for the practising physician to understand what statistical tests were performed, why they were performed and what their caveats are.

I set out to write this book out of a desire to help trainee medics and allied health professionals get their teeth into the daunting field of medical statistics. Although there are plenty of books currently available on the market, and many of them are written by highly qualified statisticians, these books place a heavy emphasis on teaching the mathematics of statistics, a non-starter for the non-mathematical mind.

‘Making sense of medical statistics’ is thus an aid to a journey through the maze of medical statistics, largely avoiding mathematics and formulas. The book intends to teach the learner the essential concepts rather than the formulas. There is an emphasis on active learning: one will find brain-teasers on every page. To keep the attention of busy clinicians, every section is kept short. The hard copy appears slim so as not to overwhelm the newbie learner. The slim volume is also intended to make the book more accessible and easy to carry in the pocket. We utilised examples from the whole spectrum of the medical literature so that the learning would be practical and relevant rather than mathematical and abstract.

The other useful feature of this book is the use of copious illustrations. I have found it easier to understand many concepts of statistics with pictures and thought the learner might appreciate the same. The pictures will also hopefully break the monotony of words of what is often a difficult topic to understand. Keeping in mind the differing learning needs of the reader nearly all chapters have been divided into the core and extended learning sections. Those interested in a very basic overview can flip through the core learning material but the more interested learner can engage with additional material in the extended learning section. Once one completes the printed material there is an equal amount of material online for the even more advanced learner. The book ends with a list of freely available statistical software and useful websites and learning material so that one can make independent progress in learning. I hope the book would prove useful for the beginner but those at a more advanced stage of learning may also come to appreciate the light reading interspersed with historical anecdotes from the world of medical statistics. I encourage readers to get back to me via mesdstatsfeedback@gmail.com. Your suggestions and criticisms are eagerly awaited!


The post Author Shrawan Kumar’s Mathematical Journey first appeared on Fifteen Eighty Four | Cambridge University Press.

I earned my B.Sc. degree in 1973 from a small college affiliated to Gorakhpur University (in northeastern India) and then my M.Sc. degree in mathematics from Bombay University in 1975. Immediately afterwards, I joined the Tata Institute of Fundamental Research (TIFR), Bombay, to do my Ph.D. in mathematics, which I obtained in 1986.

I held two postdoctoral positions: one year (1983-84) at the Mathematical Sciences Research Institute, Berkeley, and another year (1984-85) at MIT (as a C.L.E. Moore Instructor). Then I returned to TIFR as a Fellow and was later promoted to Reader. I moved to the University of North Carolina, Chapel Hill, in 1991 as a Full Professor.

I have held short and long term visiting professor/scholar positions at various institutions including the Institute for Advanced Study, Princeton; MIT; The University of British Columbia; University of P. and M. Curie, Paris; Scuola Normale Superiore, Pisa; Ecole Normale Superieure, Paris; Max Planck Institut fur Mathematik, Bonn; ICTP, Trieste; Research Institute for Mathematical Sciences, Kyoto; Erwin Schrodinger International Institute for Mathematical Physics, Wien; Isaac Newton Institute for Mathematical Sciences, Cambridge; Weizmann Institute, Israel; Hausdorff Research Institute for Mathematics, Bonn; Institut Mittag-Leffler, Djursholm (Sweden); Duke University; University of Sydney.

In trying to build a theory it is very important to look at some examples and ‘test’ the questions one wants to ask. But, in the end, I am more interested in ‘general’ results. For example, I will not be satisfied to prove a result, say, for SL(n) (unless it is not true more generally) and I will try to prove it for general semisimple groups (which may sometimes require a different, modified formulation). For me, examples are stepping stones to a general theory. In the same vein, I am completely dissatisfied with a case-by-case proof of a general result. I like collaborations, as is evident from my fairly long list of collaborators (36 so far). Collaborators bring different expertise to bear on the problem at hand, which is often a great asset. But it is important to have the ‘right tuning’ with one’s collaborators.

It is very important to convey your results in a precise and clear way. When I am writing a paper or a book, I keep this principle in mind. I hope I have not been too unsuccessful. Some of the books by Milnor (e.g., his book with Stasheff on ‘Characteristic Classes’), Rudin and Serre are my ideals.


The post Algorithmic Randomness first appeared on Fifteen Eighty Four | Cambridge University Press.

In 1965, the mathematician A.N. Kolmogorov gave a precise definition of random finite strings in this computational vein, defining a string to be random if it cannot be compressed by any Turing machine (an abstract model of computation). An alternative definition of randomness for both finite strings and infinite sequences, based on computably presented statistical tests, was offered by Per Martin-Löf in 1966.

How do these two approaches relate to one another? First, in the case of finite strings, Martin-Löf showed that the strings that are statistically random in his sense are precisely the strings that are incompressible in Kolmogorov’s sense. In the case of infinite sequences, C.P. Schnorr and Leonid Levin independently proved in the early 1970s that, using a variant of Kolmogorov’s definition of randomness, a sequence is random in Martin-Löf’s sense exactly when all of its finite initial segments are incompressible. Schnorr also proved the equivalence of these notions with a third notion in terms of unpredictability using effective betting strategies.
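The incompressibility idea can be illustrated loosely (a general-purpose compressor such as zlib is of course far weaker than the universal Turing-machine notion in Kolmogorov's definition) by comparing how well a compressor does on a highly patterned string versus pseudo-random bytes:

```python
import random
import zlib

# A highly patterned string: a short program could print it, and zlib
# squeezes it down dramatically.
structured = b"ab" * 5000

# Pseudo-random bytes (seeded for reproducibility): zlib finds no
# pattern to exploit and cannot shrink them.
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(10_000))

print(len(zlib.compress(structured)))  # a few dozen bytes
print(len(zlib.compress(noise)))       # about 10,000 bytes -- no real compression
```

A genuinely random string, in Kolmogorov's sense, is one for which *no* algorithm, not just zlib, can do better than storing it verbatim.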

While research in this area has continued since the notion of algorithmic randomness was formalized, there has been a flurry of activity beginning in the early 2000s. This research originally focused on the relationship between randomness and classical computability theory: How computationally powerful can a random sequence be, and how does randomness interact with other computability-theoretic concepts? More recently, the focus has expanded to, for instance, the different ways randomness can be relativized to an oracle or formulated in terms of different probability measures.

Researchers in algorithmic randomness have also begun to consider the relationship between analysis and randomness. Almost all sequences are random, and many theorems in analysis hold for almost all real numbers: Can we say that a certain kind of function is differentiable at exactly the random points, or that a certain kind of function’s Fourier series converges on exactly the random points? These types of questions have also been fruitfully investigated, revealing that different notions of randomness capture different kinds of typical behavior in analysis as well as other areas of classical mathematics.

Another recent avenue of investigation is the definition of randomness in “higher” and “lower” contexts. What would it mean to define randomness in the context of effective descriptive set theory, where we can use sets given by higher-order definitions, or in the context of computational complexity theory, where we limit ourselves by imposing resource bounds on the computations used to detect randomness? Drawing on tools in both of these contexts has greatly enriched the study of randomness.

Much of this recent work is surveyed in our edited collection *Algorithmic Randomness: Progress and Prospects*. We hope it provides not only an introduction to algorithmic randomness in general but also a sense of the current work in the field and potential future research directions.


The post Anachronism(s) in the history of mathematics first appeared on Fifteen Eighty Four | Cambridge University Press.

Of all scientific disciplines, mathematics is the one that displays the most enduring elements of continuity through ages and cultures. So much so that the German mathematician Hermann Hankel could, not without reason, write: ‘In most sciences one generation tears down what another has built, and what one has established another undoes. In mathematics alone each generation builds a new storey to the old structure’. This often-quoted statement implies that as historians of mathematics we can translate past mathematical texts into contemporary language with a degree of success and scope unknown to historians of, say, medicine or chemistry. It would be unjustified to deny the historian of mathematics such a possibility of translation, of familiarity with past texts: after all, such possibility and familiarity are historical facts. However, we recognize that the greatest masters in history, also in the history of mathematics, have achieved more convincing interpretations exactly because they taught us how to ‘see the differences’ between past and present. Christine Proust, one of the great experts in the field, puts it beautifully:

The mathematics of Mesopotamia is the most ancient which has been transmitted to us. These texts, written on clay tablets in cuneiform symbols, deal with mathematical objects familiar to us, such as numbers, units of measurement, areas, volumes, arithmetical operations, linear and quadratic problems, or algorithms. However, when we look more closely, these familiar objects reveal strange features on the clay tablets.

Another eminent historian of mathematics, Henk Bos, similarly states:

*Recognition makes it possible to distinguish historical events and thus initiates the link of past to present. If recognition or affinity is absent, earlier events can hardly, if at all, be historically described. Wonder, on the other hand, is indispensable too. The unexpected, the essentially different nature of occurrences in the past excites the interest and raises the expectation that something can be discovered and learned. History studied without wonder reduces itself to a mere listing of recognizable past events, which differ from what is familiar only by having another date.*

Anachronism, indeed, comes in several versions, some vicious, others virtuous. How can we strike a balance between recognition and wonder, between a study of the similarities of the past with the present and a realization that the past is alien to the present, that it is a ‘foreign country’, as Lowenthal puts it? The authors of this book attempt an answer to these questions by adopting a bottom-up approach, which is to say by offering the reader a rich palette of historical cases, taken from European and non-European (Chinese and Indian) history.

References

- Bos, Henk J.M. (1989). Recognition and wonder: Huygens, tractional motion and some thoughts on history of mathematics. *Tractrix, Yearbook for the History of Science, Medicine, Technology and Mathematics*, 1, 3–20.
- Hankel, Hermann (1869). *Die Entwickelung der Mathematik in den letzten Jahrhunderten. Antrittsvorlesungen*. Tübingen: Fues’sche Sortimentsbuchhandlung.
- Lowenthal, David (2015). *The Past is a Foreign Country*, second edition. Cambridge: Cambridge University Press.
- Proust, Christine (2015). Mathématiques en Mésopotamie: étranges ou familières? In: *Pluralités Culturelles et Universalité des Mathématiques: Enjeux et Perspectives pour leur Enseignement et leur Apprentissage – Actes du Colloque EMF2015 Plénières*, L. Theis (ed). Alger: Université des Sciences et de la Technologie Houari Boumediene, Société Mathématique d’Algérie, 17–39.


The post What have Mathematics and Statistics ever done for you? first appeared on Fifteen Eighty Four | Cambridge University Press.

Senior Marketing Executive, Cambridge University Press

How much do you know about the influence of mathematics and statistics? April is Mathematics and Statistics Awareness Month, so we thought we would share a quick snapshot…

You probably know that secure online shopping and private messaging on your mobile or cell phone would not be possible without something called public key cryptography. But did you know it was based on a branch of mathematics called number theory? Film streaming and online gaming would be impossible without communications theory and signal processing, which employs an area of mathematics called combinatorics.
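As a sketch of how number theory does the work, here is textbook-toy RSA in a few lines of Python (the primes are absurdly small and chosen purely for illustration; real keys use primes hundreds of digits long):

```python
# Toy RSA (insecure -- illustration only).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key: m^e mod n
decrypted = pow(ciphertext, d, n)  # decrypt with the private key: c^d mod n

print(message, ciphertext, decrypted)  # decrypted equals the original 65
```

The security of the real thing rests on the number-theoretic fact that recovering d from (n, e) appears to require factoring n, which is infeasible for large primes.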

Meanwhile, the ongoing COVID-19 pandemic has made many of us sadly familiar with the statistical tool called the R number. On the same theme, an equation called Bayes’ Rule can be used to work out the accuracy of COVID test results.
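Bayes' Rule combines a test's sensitivity and specificity with the background prevalence to give the probability that a positive result is a true positive. The figures below are purely illustrative, not the characteristics of any actual COVID test:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test), by Bayes' Rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At 1% prevalence, even a good test produces mostly false positives:
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.95, prevalence=0.01)
print(f"{ppv:.1%}")  # roughly 15%
```

The counterintuitive result, that most positives are false when the disease is rare, is exactly the kind of insight Bayes' Rule delivers.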

Then there is the discipline called operations research (OR) – sometimes called management science. Essentially, it’s the science of making things work smoothly. It uses a combination of mathematical modelling, optimization and statistics alongside disciplines like organization studies and psychology to address logistical challenges such as the surprisingly complex problem of managing elevator usage.

**Solutions for real-world problems**

Mathematics is essential in answering many complex questions we find in the real world. For instance, we rely on mathematics to model the Earth’s climate. Mathematics and statistics are used to study many aspects of the natural world, such as in life science and in topics like geophysics. Plus, let’s not forget epidemiology, which uses statistical tools, such as the R number mentioned above, to model the spread of diseases.

**Going with the flow**

Understanding the ways fluids behave in different situations is crucial to many applications in engineering, chemistry, physics and biology. For instance, it is our understanding of fluid dynamics that lets us build planes that fly, and create hydraulic brakes that stop cars. It even helps us understand how the human heart works.

**Know when to fold ‘em**

Who would have thought that the seemingly obscure mathematics of folding in origami has applications in engineering, biochemistry (protein folding) and aeronautics (unfolding solar panels in space)? All this from an area of mathematics that might have seemed, at first, to have little value beyond academic interest.

**Machine learning turning fiction into fact**

Recently, in a spooky development that could have come straight from the Harry Potter movies, machine learning techniques (with mathematics at their heart) have made it possible to animate photographs and make them ‘come to life’ – a bit like those grumpy paintings at Hogwarts. Another application is the increasing power of online tools such as Google Translate, which uses a technique known as natural language processing to give almost instant language translation. Granted, it’s not always perfect, but less than a generation ago, this would all have seemed like science fiction.

**Economics and finance models**

Economists, businesses and financial organisations like insurance companies use mathematics and statistics to carry out data analysis, build financial models (such as for financial markets) and support decision making. One of the tools used particularly in economics is game theory, a slick mathematical method for complex decision making. It’s worth noting that the 2020 Nobel Prize in Economics was awarded to researchers in game theory, 26 years after John Nash was awarded the prize for his work on game theory, as dramatized in the film *A Beautiful Mind*.
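Game theory can be made concrete with a tiny example. The sketch below (my own illustration; the payoff numbers are the textbook Prisoner’s Dilemma, not anything from this post) finds the pure-strategy Nash equilibria of a 2×2 game by checking that neither player can gain by deviating unilaterally:

```python
# Pure-strategy Nash equilibria of a 2x2 game (Prisoner's Dilemma payoffs).
import itertools

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(r, c):
    # Nash equilibrium: neither player gains by deviating unilaterally.
    row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
    return row_best and col_best

equilibria = [(r, c) for r, c in itertools.product(strategies, strategies)
              if is_nash(r, c)]
print(equilibria)  # [('D', 'D')]
```

Mutual defection is the unique equilibrium here, even though both players would do better cooperating – exactly the kind of counterintuitive conclusion that makes game theory so useful for analysing real decisions.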

**Psychology and social science**

Statistics is essential to psychology research for a number of reasons, not least because it lets researchers assess the significance of the results obtained from experiments that often involve many participants. Without the tools of statistics it would be very difficult to see patterns in such large amounts of data. And it’s not just psychology. Every other social science, such as sociology, relies on statistics to make sense of experiments. If you are dealing with so-called big data (on the worldwide web for example) then you can also employ machine learning and pattern recognition techniques that are – you guessed it – based on mathematics and statistics.

**Here, there and everywhere**

The influence of mathematics and statistics can be found almost everywhere you look, from the online translation tools of Google, to the design of airplanes, from climate models and weather prediction to solar panels on satellites and the smooth running of elevators. Mathematics and statistics are essential to the modern world, and to understanding everything in it.

**Find out more**

New to mathematics and want to learn more? Take a look at *Quantitative Reasoning*, a book that helps readers think mathematically about real-world questions.

**Related Content from Cambridge University Press**

Communications and Signal Processing

Multimedia Fluid Mechanics Online (an undergraduate teaching tool)

Origametry – the Mathematics of Paper Folding

Mathematics and Statistics for Economics

The post What have Mathematics and Statistics ever done for you? first appeared on Fifteen Eighty Four | Cambridge University Press.

The post Bounded gaps between primes: the epic breakthroughs of the early 21st century first appeared on Fifteen Eighty Four | Cambridge University Press.

The book sets out the mathematical content of the breakthroughs in full detail, except for the work based on Deligne’s solution to the Weil conjectures. That would be for a different book, perhaps one on the Bombieri–Vinogradov theorem and its extensions and applications. For the expert striving to improve the best known bound of 246, most of this material will be familiar. However, the main target audience is beginning researchers, for example graduate students. I have vivid memories of my time at Columbia, having to scrap with other graduate students for important books held behind the library desk. One could have these for only one hour at a time, completely insufficient for understanding a major proof. To assist this group of potential readers, the appendices contain proofs of supporting mathematics such as the spectral theorem for compact operators, Weil’s inequality for curves modulo primes, Bessel functions and the Shiu–Brun–Titchmarsh estimate. I have tried to pare this material down to only what is essential for the work in the chapters, and the chapters down to only what is essential for the breakthroughs. But it’s certainly not simple!

Along the way there appeared to be many ways in which the results could be improved. However, I did not tarry: having started, the worst outcome would have been to leave the work unfinished. Now that it is complete, it is hoped that others will find paths to take it forward, with or without the text. For this writer there are other pressing tasks, and the Erdős lifetime limit is not so far off.

What the book is not: it is not an account of the breakthroughs as a human endeavour. That would be a different book. There is the odd comment here and there that would qualify, along with some highly abbreviated biographical paragraphs. It is this author’s hope that such a book will be written, and soon, before the individual and collective memory of events fades. To this end, the book’s web page links to a “backstory” page containing an annotated series of timelines and links to sources, which might inspire someone to write up the human story with an absolute minimum of mathematical detail. Because what happened, and especially the way it happened, is unique – I would say in the entire history of mathematics – an account of the human side of the developments, in the hands of someone with suitable skills and experience, would I believe be of interest to a very wide audience.

As usual, the mathematical arguments are often difficult to follow and I needed help. This was generously provided, especially by Pat Gallagher, Dan Goldston, Yoichi Motohashi and Terry Tao. I was unable to obtain a reply from Yitang Zhang, in spite of repeated requests, other than being sent his image. In the end I did not include more than a summary account of the proof of his extension of the Bombieri–Vinogradov theorem; a full report of his proof, or better that of Polymath8a, would belong in the other potential book mentioned above. In any case, Maynard, Tao and Polymath8b went so much further than Zhang with their multidivisor/multidimensional method, an approach that seems both accessible and open to improvement.

Which brings me to my final remark: where next in the bounded gaps saga? As hinted above, the structure of narrow admissible tuples, related to the multidivisor structure of Maynard and Tao, together with variations of the perturbation structure of Polymath8b and of the polynomial basis used in the optimization step, could assist progress to the next target. Based on “jumping champions” results, this should be 210. But who knows!
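For readers wondering what makes a tuple “admissible”: a tuple of integer shifts is admissible if, for every prime p, it avoids at least one residue class modulo p (only primes up to the tuple’s length need checking). A minimal sketch of that check – my own illustration, not code from the book:

```python
def is_admissible(shifts):
    """Check admissibility: for every prime p <= len(shifts), the shifts
    must not cover all p residue classes modulo p."""
    k = len(shifts)
    for p in range(2, k + 1):
        if all(p % d for d in range(2, p)):  # p is prime (trial division)
            residues = {h % p for h in shifts}
            if len(residues) == p:           # every residue class is hit
                return False
    return True

print(is_admissible((0, 2, 6, 8, 12)))  # True
print(is_admissible((0, 2, 4)))         # False
```

Here (0, 2, 6, 8, 12) is a narrow admissible 5-tuple of the kind the Maynard–Tao method works with, while (0, 2, 4) fails because it covers every residue class modulo 3, so one of n, n+2, n+4 is always divisible by 3.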


The post Did an apple really fall on Newton’s head? first appeared on Fifteen Eighty Four | Cambridge University Press.

Everyone knows that *The Principia* was based on the inspiration that struck Newton when the apple struck his head, as you can see from the cartoon above. The thought that passed through his head was as follows:

“Clearly the earth attracts the apple in the same way that it attracts the moon, and the force very likely obeys the inverse square law. I can check this by calculating the acceleration of the moon towards the earth, as determined by its orbit and the length of a synodic month.

“But the moon does not orbit about the centre of the earth but about the barycentre of the earth-moon system. To calculate this I need to know the mass of the moon. What difference would it make if the moon became twice as dense? The tides would become stronger. I can compare the strength of the lunar and solar tides, and hence compare the density of the moon with the density of the sun, and I can compare the density of the sun with the density of the earth as they both have satellites. Now I need pencil and paper…”
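The first part of that imagined calculation is easy to repeat today. The sketch below (my own back-of-envelope Python, using rounded modern constants – none of these figures come from the post) compares the moon’s centripetal acceleration with the inverse-square prediction obtained by scaling surface gravity down by (R_earth/d_moon)²:

```python
import math

# Rounded modern values (assumed, not from the post)
g = 9.81                # surface gravity, m/s^2
R_earth = 6.371e6       # mean radius of the earth, m
d_moon = 3.844e8        # mean earth-moon distance, m
T = 27.32 * 86400       # sidereal month, s

# Centripetal acceleration of the moon on a circular orbit of radius d_moon
a_orbit = 4 * math.pi**2 * d_moon / T**2

# Inverse-square prediction: surface gravity scaled by (R_earth / d_moon)^2
a_predicted = g * (R_earth / d_moon)**2

print(round(a_orbit / a_predicted, 2))  # close to 1
```

The two accelerations agree to within about one per cent, which is essentially the agreement Newton was looking for when he made the apple/moon comparison.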

Newton’s *anni mirabiles* were 1665-1667, when he was twenty-two to twenty-four years old. In 1666 the University of Cambridge went into lockdown because of the plague, and he retreated to his home base in Lincolnshire to think. This is when his theory of gravity, and so much more, was developed, and the semi-mythical apple fell from the tree.

The current Covid crisis has killed more people than the plague of 1666, which none the less is thought to have killed a quarter of the population of London. The current lockdown leaves academics with vast electronic resources, whereas in Newton’s day there was nothing to do but to think. Is there a lesson to be learned?

He was an outstanding mathematician, physicist, astronomer, historian, theologian, and (as master of the mint) civil servant. But Voltaire, who thought so highly of him, and was the lover of the great Émilie du Châtelet, the only translator of *The Principia* into French, asserted that he was so famous in England because he had a beautiful niece. Princes of Italy came to England in order to set eyes on him, and, for all I know, on his niece.

Reverting to the apple/moon comparison, a great variety of simple ideas come into play, and I intend to concentrate on simple ideas, rather than on the technical details. For now, I ask you some simple questions concerning the tides.

- The sun attracts the earth far more strongly than does the moon. The earth rotates about the sun, not about the moon. So why does the moon cause greater tides on the earth than does the sun?
- As the earth spins on its axis, the moon reaches its highest point in the sky, when it attracts the sea most strongly, approximately once every 24 hours. But we get a high tide approximately once every 12 hours. Why is this?
- Why only approximately every 12 hours?
- Some high tides are higher than others. What other factors may contribute to these discrepancies? Ignoring the weather, you are doing well if you can think of five.

If you are new to these ideas you are in much the same position as Newton, who (it seems) never saw the sea, but sat in his garden thinking.

I should perhaps mention that I am an emeritus professor of pure mathematics at Queen Mary, University of London. I work in algebra, and have had to learn much in order to understand the great breadth of Newton’s masterpiece.

I hope that my understanding has been sufficient for the purposes of the task I have undertaken. I was moved to produce my translation by my feeling that the Cohen-Whitman translation was too opaque, and based on an inadequate understanding of the text. I detail my attitude to their work in the preface to my translation, and acknowledge the help I have received from many people.

I should repeat here my gratitude to Carl Murray, who persuaded me to have my translation published and produced the diagrams, to Wolfram Neutsch, who read the entire manuscript, and saved me from some embarrassing errors, to Niccolò Guicciardini for much learned assistance, and to David Tranah of Cambridge University Press, who fortunately insisted on setting the translation in a bright modern style. I am grateful for his hard work and professionalism which also saved me from a number of errors.

The online annotated translation of *The Principia* (www.17centurymaths.com) by Ian Bruce unfortunately did not come to my attention until my translation was in the hands of C.U.P.
