Newton found something essential: the Earth attracts the apple (so far not very surprising), but the apple also attracts the Earth (admittedly much less). The fact that the apple pulls quite weakly is due to another discovery by Newton: the gravitational force is proportional to the attracting mass. It is worth thinking that through further. If glaciers melt, then the corresponding part of the Earth, say Greenland, loses some of its mass. In comparison to the mass of the entire island and the segment of mantle and core underneath, it is a very small percentage loss. But there is a loss, and it must, according to Sir Isaac, have a consequence for the gravitational field. And, indeed, current technology, such as that provided by the satellite missions GRACE and GRACE-FO, is capable of measuring deviations of the gravitational field over Greenland and elsewhere.

This is quite impressive, but it is not the end of the story, because we are heading in the wrong direction. We say: there is a change of mass, so there must be a change of the gravitational field, and Newton gives us a formula to calculate the latter. However, we are actually putting the cart before the horse here. The satellites yield information about the gravitational field at orbit altitude, and we want to calculate mass anomalies at the Earth's surface from that.

A successor to Newton in the Lucasian chair, Sir George Gabriel Stokes, noticed that the opposite question that arises – the so-called inverse problem – is by far more difficult to answer and does not have a unique solution. Science has progressed, and today we know, for example, that mass changes which occur only at the surface (such as melting glaciers) can be uniquely calculated from the gravitational field. However, if there is (as usual) noise on the data, we will most likely compute a result which is far from reality. Only sophisticated tools – so-called regularization methods – are able to avoid this.
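A minimal numerical sketch (my own toy example, not from the book) shows why regularization is needed. The matrix below is nearly singular, mimicking the smoothing that the gravitational field undergoes on its way up to satellite altitude; a tiny amount of noise in the data wrecks the naive inversion, while Tikhonov regularization stabilizes it:

```python
def solve2(M, c):
    """Solve a 2x2 linear system M x = c by Cramer's rule."""
    (a, b), (d, e) = M
    det = a * e - b * d
    return ((c[0] * e - b * c[1]) / det, (a * c[1] - c[0] * d) / det)

A = [[1.0, 1.0], [1.0, 1.0001]]   # nearly singular forward operator
x_true = (1.0, 1.0)               # the "mass anomaly" we want to recover
b_noisy = (2.001, 2.0001)         # exact data would be (2.0, 2.0001); ~0.05% noise added

naive = solve2(A, b_noisy)        # unregularized inversion: wildly wrong, roughly (11, -9)

# Tikhonov regularization: solve (A^T A + lam * I) x = A^T b instead.
lam = 1e-4
AtA = [[A[0][0]**2 + A[1][0]**2 + lam, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
       [A[0][0]*A[0][1] + A[1][0]*A[1][1], A[0][1]**2 + A[1][1]**2 + lam]]
Atb = (A[0][0]*b_noisy[0] + A[1][0]*b_noisy[1],
       A[0][1]*b_noisy[0] + A[1][1]*b_noisy[1])
regularized = solve2(AtA, Atb)    # close to the true (1, 1)
```

The regularized solution trades a tiny bias for enormous stability, which is exactly the bargain that makes satellite gravimetry usable.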

I use the aforementioned problem as an example of how impressive measurement technology always needs an essential companion: sophisticated mathematical methods for the evaluation of the data. In geosciences, there is a long tradition of understanding mathematics as a ubiquitous part of research, going back as far as Ancient Greece. The scientific field where Earth sciences and mathematics merge is nowadays called *geomathematics*.

My book “Geomathematics – Modelling and Solving Mathematical Problems in Geodesy and Geophysics” provides mathematical foundations and tools for different applications in Earth sciences. It particularly focusses on global and regional modelling regarding gravitation, geomagnetics, and seismology. Though different mathematical theories and numerical methods are discussed for the three applications, there are many interconnections between the various problems.

The purpose of the book is, therefore, to provide a reference work for numerous mathematical theories which are fundamental in gravitational and magnetic field modelling, as well as seismology. It also presents some new tools, e.g. for best-basis selection. Ultimately, my hope is that it encourages more interdisciplinary research between mathematicians and Earth scientists.


The disquiet about statistics in medicine is understandable. Most of us physicians did not have a loving relationship with mathematics, and many of us probably despise numbers, formulas and equations. However much we dislike the intrusion of statistics into medicine, we need some understanding of it to make sense of medical research. An appreciation of medical statistics has become even more relevant of late, as we find more and more studies that employ large databases, the interpretation of which requires an increasingly complex array of statistical tests.

Medicine is not an exact science, and we largely depend on the balance of probabilities to make diagnosis and treatment decisions. We don't always make the correct decisions. Some treatments work, others don't, and a few may even be downright dangerous. The only way to find out what works and what does not is to put them to the test. But so many factors, some known, some unknown, can come into play in real life and affect the results of a test that it may be difficult to distinguish the apparent from the real. Therefore, we need to know whether an apparent difference in effect is genuine or observed purely by chance.
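One simple way to ask "genuine or chance?" is a permutation test. The sketch below uses made-up blood-pressure reductions (not data from any real trial): if shuffling the treatment labels almost never reproduces a gap as large as the one observed, chance alone is an unlikely explanation.

```python
import random

random.seed(1)

# Illustrative (invented) falls in systolic blood pressure, in mmHg:
treated = [12.0, 9.0, 14.0, 11.0, 13.0, 10.0, 15.0, 12.0]
control = [8.0, 5.0, 9.0, 7.0, 6.0, 10.0, 7.0, 8.0]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(control)   # 4.5 mmHg in favour of treatment

# Reassign labels at random many times and see how often chance alone
# produces a difference at least this large.
pooled = treated + control
n = len(treated)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        extreme += 1

p_value = extreme / trials   # a small p-value: chance is an implausible culprit
```

No formula memorization is needed; the computer simply replays "pure chance" thousands of times and counts.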

Simply knowing a treatment works is not enough. If we decide that the treatment genuinely works, we would also like to know how effective it is, that is, the size of the treatment effect. If we agree that the treatment effect is sizable, we should also ensure that the treatment makes a meaningful difference in the outcomes that matter and does not simply make the numbers look good. For example, it is not enough to see a reduction in systolic blood pressure or serum cholesterol; we need to see a reduction in cardiovascular mortality and morbidity. The utility of statistics in medicine is that it allows us to make sense of the seemingly random numbers generated by research and reach those conclusions.

Admittedly, we do need complex statistical equations to help make up our minds. Does that mean we all need to learn these formulas? Some physicians will happily put on the researcher's hat, but for most of us in practice that may be a challenge too far. How do we then master the web of medical statistics? We don't.

The widespread availability of statistical software has given rise to the temptation to grab hold of software, open the spreadsheet, press a button and hope for some magical answers. This is just as risky as taking hold of the steering wheel without knowing the car or the route. It could get really messy! I do not think everyone needs to learn statistical formulas or be able to perform the tests. It is enough for the practising physician to understand what statistical tests were performed, why they were performed and what their caveats were.

I set out to write this book out of a desire to help trainee medics and allied health professionals get their teeth into the daunting field of medical statistics. Although there are plenty of books currently on the market, and many of these are written by highly qualified statisticians, they place a heavy emphasis on teaching the mathematics of statistics, a non-starter for the non-mathematical mind.

*Making Sense of Medical Statistics* is thus an aid to a journey through the maze of medical statistics, largely avoiding mathematics and formulas. The book aims to teach the learner the essential concepts rather than the formulas. There is an emphasis on active learning: one will find brain-teasers on every page. To keep the attention of busy clinicians, every section is kept short. The hard copy is slim so as not to overwhelm the newbie learner; the slim volume is also intended to make the book more accessible and easy to carry in the pocket. We use examples from the whole spectrum of the medical literature so that the learning is practical and relevant rather than mathematical and abstract.

Another useful feature of this book is the use of copious illustrations. I have found it easier to understand many concepts of statistics with pictures, and thought the learner might appreciate the same. The pictures will also, I hope, break the monotony of words in what is often a difficult topic to understand. Keeping in mind the differing learning needs of readers, nearly all chapters have been divided into core and extended learning sections. Those interested in a very basic overview can keep to the core learning material, while the more interested learner can engage with additional material in the extended learning section. Once one completes the printed material, there is an equal amount of material online for the even more advanced learner. The book ends with a list of freely available statistical software and useful websites and learning material so that one can make independent progress. I hope the book will prove useful for the beginner, but those at a more advanced stage may also come to appreciate the light reading interspersed with historical anecdotes from the world of medical statistics. I encourage readers to get back to me via mesdstatsfeedback@gmail.com. Your suggestions and criticisms are eagerly awaited!

I earned my B.Sc. degree in 1973 from a small college affiliated to Gorakhpur University (in northeastern India) and then my M.Sc. degree in mathematics from Bombay University in 1975. Immediately afterwards, I joined the Tata Institute of Fundamental Research (TIFR), Bombay, to do my Ph.D. in mathematics, which I obtained in 1986.

I held two postdoctoral positions: for one year (1983-84) at the Mathematical Sciences Research Institute, Berkeley, and for another year (1984-85) at MIT (as a C.L.E. Moore Instructor). Then I returned to TIFR as a Fellow and was later promoted to Reader. I moved to the University of North Carolina, Chapel Hill in 1991 as a Full Professor.

I have held short and long term visiting professor/scholar positions at various institutions including the Institute for Advanced Study, Princeton; MIT; The University of British Columbia; University of P. and M. Curie, Paris; Scuola Normale Superiore, Pisa; École Normale Supérieure, Paris; Max Planck Institut für Mathematik, Bonn; ICTP, Trieste; Research Institute for Mathematical Sciences, Kyoto; Erwin Schrödinger International Institute for Mathematical Physics, Wien; Isaac Newton Institute for Mathematical Sciences, Cambridge; Weizmann Institute, Israel; Hausdorff Research Institute for Mathematics, Bonn; Institut Mittag-Leffler, Djursholm (Sweden); Duke University; University of Sydney.

In trying to build a theory it is very important to look at some examples and ‘test’ the questions one wants to ask. But, in the end, I am more interested in ‘general’ results. For example, I will not be satisfied to prove a result, say, for SL(n) (unless it is not true more generally), and I will try to prove it for general semisimple groups (which may sometimes require a different, modified formulation). For me, examples are stepping stones to a general theory. In the same vein, I am completely dissatisfied with a case-by-case proof of a general result. I like collaborations, as is evident from my fairly long list of collaborators (36 so far). Collaborators bring different expertise to bear on the problem at hand, which is often a great asset. But it is important to have the ‘right tuning’ with one's collaborators.

It is very important to convey your results in a precise and clear way. When I am writing a paper or a book, I keep this principle in mind. I hope I have not been too unsuccessful. Some of the books by Milnor (e.g., his book with Stasheff on ‘Characteristic Classes’), Rudin and Serre are my ideals.

In 1965, the mathematician A.N. Kolmogorov gave a precise definition of random finite strings in this computational vein, defining a string to be random if it cannot be compressed by any Turing machine (an abstract model of computation). An alternative definition of randomness for both finite strings and infinite sequences, based on computably presented statistical tests, was offered by Per Martin-Löf in 1966.

How do these two approaches relate to one another? First, in the case of finite strings, Martin-Löf showed that the strings that are statistically random in his sense are precisely the strings that are incompressible in Kolmogorov’s sense. In the case of infinite sequences, C.P. Schnorr and Leonid Levin independently proved in the early 1970s that, using a variant of Kolmogorov’s definition of randomness, a sequence is random in Martin-Löf’s sense exactly when all of its finite initial segments are incompressible. Schnorr also proved the equivalence of these notions with a third notion in terms of unpredictability using effective betting strategies.
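A loose, hedged analogue of the incompressibility idea can be played with directly: a real compressor such as zlib is a crude stand-in for the optimal Turing-machine description. Highly patterned strings compress dramatically; random-looking strings barely compress at all. (Of course, the pseudo-random bytes below are not Kolmogorov-random at all, since the small seed plus the generator describe them compactly; zlib just cannot find that description.)

```python
import random
import zlib

random.seed(0)

patterned = b"01" * 5000                                     # 10,000 very regular bytes
noisy = bytes(random.getrandbits(8) for _ in range(10_000))  # 10,000 pseudo-random bytes

len_patterned = len(zlib.compress(patterned, 9))   # tiny: a short description suffices
len_noisy = len(zlib.compress(noisy, 9))           # barely shrinks, if at all
```

The gap between the two compressed lengths is the everyday shadow of Kolmogorov's definition.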

While research in this area has continued since the notion of algorithmic randomness was formalized, there has been a flurry of activity beginning in the early 2000s. This research originally focused on the relationship between randomness and classical computability theory: How computationally powerful can a random sequence be, and how does randomness interact with other computability-theoretic concepts? More recently, the focus has expanded to, for instance, the different ways randomness can be relativized to an oracle or formulated in terms of different probability measures.

Researchers in algorithmic randomness have also begun to consider the relationship between analysis and randomness. Almost all sequences are random, and many theorems in analysis hold for almost all real numbers: Can we say that a certain kind of function is differentiable at exactly the random points, or that a certain kind of function’s Fourier series converges on exactly the random points? These types of questions have also been fruitfully investigated, revealing that different notions of randomness capture different kinds of typical behavior in analysis as well as other areas of classical mathematics.

Another recent avenue of investigation is the definition of randomness in “higher” and “lower” contexts. What would it mean to define randomness in the context of effective descriptive set theory, where we can use sets given by higher-order definitions, or in the context of computational complexity theory, where we limit ourselves by imposing resource bounds on the computations used to detect randomness? Drawing on tools in both of these contexts has greatly enriched the study of randomness.

Much of this recent work is surveyed in our edited collection *Algorithmic Randomness: Progress and Prospects*. We hope it provides not only an introduction to algorithmic randomness in general but also a sense of the current work in the field and potential future research directions.

Of all scientific disciplines, mathematics is the one that displays the most enduring elements of continuity through ages and cultures. So much so that the German mathematician Hermann Hankel could, and not without reason, write: ‘In most sciences one generation tears down what another has built, and what one has established another undoes. In mathematics alone each generation builds a new storey to the old structure’. This often-quoted statement implies that as historians of mathematics we can translate past mathematical texts into contemporary language with a degree of success and scope unknown to historians of, say, medicine or chemistry. It would be unjustified to deny the historian of mathematics such a possibility of translation, of familiarity with past texts: after all, such possibility and familiarity are historical facts. However, we recognize that the greatest masters in history, also in the history of mathematics, have achieved more convincing interpretations exactly because they taught us how to ‘see the differences’ between past and present. Christine Proust, one of the great experts in the field, puts it beautifully:

The mathematics of Mesopotamia is the most ancient which has been transmitted to us. These texts, written on clay tablets in cuneiform symbols, deal with mathematical objects familiar to us, such as numbers, units of measurement, areas, volumes, arithmetical operations, linear and quadratic problems, or algorithms. However, when we look more closely, these familiar objects reveal strange features on the clay tablets.

Another eminent historian of mathematics, Henk Bos, similarly states:

*Recognition makes it possible to distinguish historical events and thus initiates the link of past to present. If recognition or affinity is absent, earlier events can hardly, if at all, be historically described. Wonder, on the other hand, is indispensable too. The unexpected, the essentially different nature of occurrences in the past excites the interest and raises the expectation that something can be discovered and learned. History studied without wonder reduces itself to a mere listing of recognizable past events, which differ from what is familiar only by having another date.*

Anachronism, indeed, comes in several versions, some vicious, others virtuous. How can we strike a balance between recognition and wonder, between a study of the similarities of the past with the present and a realization that the past is alien to the present, that it is a ‘foreign country’, as Lowenthal puts it? The authors of this book attempt an answer to these questions by adopting a bottom-up approach, which is to say by offering the reader a rich palette of historical cases, taken from European and non-European (Chinese and Indian) history.

References

- Bos, Henk J.M. (1989). Recognition and wonder: Huygens, tractional motion and some thoughts on history of mathematics. *Tractrix: Yearbook for the History of Science, Medicine, Technology and Mathematics*, 1, 3–20.
- Hankel, Hermann (1869). *Die Entwickelung der Mathematik in den letzten Jahrhunderten. Antrittsvorlesungen*. Tübingen: Fues’sche Sortimentsbuchhandlung.
- Lowenthal, David (2015). *The Past is a Foreign Country*, second edition. Cambridge: Cambridge University Press.
- Proust, Christine (2015). Mathématiques en Mésopotamie: étranges ou familières? In: *Pluralités Culturelles et Universalité des Mathématiques: Enjeux et Perspectives pour leur Enseignement et leur Apprentissage – Actes du Colloque EMF2015 Plénières*, L. Theis (ed). Alger: Université des Sciences et de la Technologie Houari Boumediene, Société Mathématique d’Algérie, 17–39.

Senior Marketing Executive, Cambridge University Press

How much do you know about the influence of mathematics and statistics? April is Mathematics and Statistics Awareness Month, so we thought we would share a quick snapshot…

You probably know that secure online shopping and private messaging on your mobile or cell phone would not be possible without something called public key cryptography. But did you know it is based on a branch of mathematics called number theory? Film streaming and online gaming would be impossible without communications theory and signal processing, which employ an area of mathematics called combinatorics.
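To make that concrete, here is a toy sketch of textbook RSA, the classic number-theoretic cryptosystem (with absurdly small primes for illustration; real systems use 2048-bit moduli and careful padding):

```python
# Two secret primes and the public modulus built from them:
p, q = 61, 53
n = p * q                  # 3233, published openly
phi = (p - 1) * (q - 1)    # Euler's totient of n, kept secret
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
```

The security rests on a number-theoretic asymmetry: multiplying p and q is easy, but recovering them from n (and hence finding d) is believed to be hard at realistic sizes.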

Meanwhile, the ongoing COVID-19 pandemic has made many of us sadly familiar with the statistical tool called the R number. On the same theme, an equation called Bayes’ Rule can be used to work out the accuracy of COVID test results.
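A short worked example shows Bayes' Rule in action on test accuracy. The numbers below are invented but plausible, purely for illustration:

```python
# Assumed inputs (illustrative, not real figures for any specific test):
prevalence = 0.01      # 1% of the tested population is infected
sensitivity = 0.90     # P(test positive | infected)
specificity = 0.95     # P(test negative | not infected)

# Total probability of a positive result, from either true or false positives:
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' Rule: probability of infection given a positive test.
p_infected_given_pos = sensitivity * prevalence / p_pos   # about 15%
```

The counter-intuitive punchline: with low prevalence, even a quite accurate test leaves most positive results as false alarms, which is exactly why base rates matter.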

Then there is the discipline called operations research (OR) – sometimes called management research. Essentially, it’s the science of making things work smoothly. It uses a combination of mathematical modelling, optimization and statistics alongside disciplines like organization studies and psychology to address logistical challenges such as the surprisingly complex problem of managing elevator usage.

**Solutions for real-world problems**

Mathematics is essential in answering many complex questions we find in the real world. For instance, we rely on mathematics to model the Earth’s climate. Mathematics and statistics are used to study many aspects of the natural world, such as in life science and in topics like geophysics. Plus, let’s not forget epidemiology, which uses statistical tools, such as the R number mentioned above, to model the spread of diseases.

**Going with the flow**

Understanding the ways fluids behave in different situations is crucial to many applications in engineering, chemistry, physics and biology. For instance, it is our understanding of fluid dynamics that lets us build planes that fly, and create hydraulic brakes that stop cars. It even helps us understand how the human heart works.

**Know when to fold ‘em**

Who would have thought that the seemingly obscure mathematics of folding in origami has applications in engineering, biochemistry (protein folding) and aeronautics (unfolding solar panels in space)? All this from an area of mathematics that might have seemed, at first, to have little value beyond academic interest.

**Machine learning turning fiction into fact**

Recently, in a spooky development that could have come straight from the Harry Potter movies, machine learning techniques (with mathematics at their heart) have made it possible to animate photographs and make them ‘come to life’ – a bit like those grumpy paintings at Hogwarts. Another application is the increasing power of online tools such as Google Translate, which uses a technique known as natural language processing to give almost instant language translation. Granted, it’s not always perfect, but less than a generation ago, this would all have seemed like science fiction.

**Economics and finance models**

Economists, businesses and financial organisations like insurance companies use mathematics and statistics to carry out data analysis, build financial models (such as for financial markets) and support decision making. One of the tools used particularly in economics is game theory, a slick mathematical method for complex decision making. It's worth noting that the 2020 Nobel Prize in Economics was awarded to researchers in game theory, 26 years after John Nash was awarded the prize for his work on game theory, as dramatized in the film *A Beautiful Mind*.
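A tiny sketch of what game theory actually computes: the code below brute-forces the pure-strategy Nash equilibria of a 2x2 game, using the classic Prisoner's Dilemma payoffs (an illustration only, not tied to any Nobel-winning model):

```python
# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(r, c):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    u_r, u_c = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in actions)
    col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
```

Here the only equilibrium is mutual defection, even though mutual cooperation would leave both players better off, which is the dilemma that makes the game famous.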

**Psychology and social science**

Statistics is essential to psychology research for a number of reasons, not least because it lets researchers assess the significance of the results obtained from experiments that often involve many participants. Without the tools of statistics it would be very difficult to see patterns in such large amounts of data. And it’s not just psychology. Every other social science, such as sociology, relies on statistics to make sense of experiments. If you are dealing with so-called big data (on the worldwide web for example) then you can also employ machine learning and pattern recognition techniques that are – you guessed it – based on mathematics and statistics.

**Here, there and everywhere**

The influence of mathematics and statistics can be found almost everywhere you look, from the online translation tools of Google, to the design of airplanes, from climate models and weather prediction to solar panels on satellites and the smooth running of elevators. Mathematics and statistics are essential to the modern world, and to understanding everything in it.

**Find out more**

New to mathematics and want to learn more? Take a look at *Quantitative Reasoning*, a book that helps readers think mathematically about real-world questions.

**Related Content from Cambridge University Press**

Communications and Signal Processing

Multimedia Fluid Mechanics Online (an undergraduate teaching tool)

Origametry – the Mathematics of Paper Folding

The book sets out the mathematical content of the breakthroughs, with all of the details except those of the work based on Deligne's solution to the Weil conjectures. Those would be for a different book, maybe one on the Bombieri-Vinogradov theorem and its extensions and applications. For the expert striving to improve the best known bound of 246, most of this material will be familiar. However, the main target audience is beginning researchers, for example graduate students. I have vivid memories of my time at Columbia, having to scrap with other grad students for important books held behind the library desk. One could have these only for one hour at a time, completely insufficient for understanding a major proof. To assist this group of potential readers, the appendices contain proofs of supporting mathematics such as the spectral theorem for compact operators, Weil's inequality for curves modulo primes, Bessel functions, the Shiu-Brun-Titchmarsh estimate, etc. I have tried to simplify this material down to only what is essential for the work in the chapters, and the chapters have been simplified down to only what is essential for the breakthroughs. But it's certainly not simple!

Along the way there appeared to be many ways in which the results could be improved. However, I did not tarry since, having started, the worst outcome would be for the work not to be completed. Now that the work is complete, it is hoped that others will find paths to take it forward, with or without the text. For this writer, there are other pressing tasks, and the Erdős lifetime limit is not so far off.

What the book is not: it is not an account of the breakthroughs as a human endeavour. That would be a different book. There is the odd comment here and there which would qualify, and some highly abbreviated biographical paragraphs. It is this author's hope that such a book will be written, and soon, before the individual and collective memory of events fades. To this end, on the book's web page there is a link to the “backstory”, a web page containing an annotated series of timelines and links to sources, which might inspire someone to write up the human story with an absolute minimum of mathematical detail. Because what happened, and especially the way it happened, is unique, I would say in the entire history of mathematics, an account of the human side of the developments, in the hands of someone with suitable skills and experience, would, I believe, be of interest to a very wide audience.

As usual, mathematical arguments are often difficult to follow, and I needed help. This was generously provided, especially by Pat Gallagher, Dan Goldston, Yoichi Motohashi and Terry Tao. I was not able to obtain a reply from Yitang Zhang, in spite of repeated requests, other than to be sent his image. In the end I did not include more than a summary account of the proof of his extension of the Bombieri-Vinogradov theorem – a full report of his proof, or better that of Polymath8a, would be part of the other potential book mentioned before. In any case, Maynard, Tao and Polymath8b went so much further than Zhang with their multidivisor/multidimensional method, an approach which seems both accessible and open to improvement.

Which brings me to my final remark: where to next in the bounded gaps saga? As hinted before, the structure of narrow admissible tuples related to the structure of multiple divisors of Maynard/Tao, and variations of the perturbation structure of Polymath8b, and of the polynomial basis used in the optimization step, could assist progress to the next target. Based on “jumping champions” results, this should be 210. But who knows!
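One small object from this saga can be checked by hand or by machine: a tuple of shifts (h_1, ..., h_k) is called admissible if for every prime p it avoids at least one residue class mod p (only primes p ≤ k can possibly fail). The sketch below is my own illustrative checker, not code from the book:

```python
def primes_up_to(n):
    """Primes p <= n by trial division (fine for tiny n)."""
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

def is_admissible(shifts):
    """True if for every prime p <= k the shifts miss some residue class mod p."""
    k = len(shifts)
    for p in primes_up_to(k):
        if len({h % p for h in shifts}) == p:   # the tuple covers every class mod p
            return False
    return True

# (0, 2, 6) is admissible; (0, 2, 4) is not, since it covers 0, 1 and 2 mod 3.
```

Admissibility is exactly the condition under which the Hardy-Littlewood heuristics, and the Maynard-Tao sieve, allow the shifted values n + h_i to be simultaneously prime infinitely often.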

Everyone knows that *The Principia* was based on the inspiration that struck Newton when the apple struck his head, as you can see from the cartoon above. The thought that passed through his head was as follows:

“Clearly the earth attracts the apple in the same way that it attracts the moon, and the force very likely obeys the inverse square law. I can check this by calculating the acceleration of the moon towards the earth, as determined by its orbit and the length of a synodic month.

“But the moon does not orbit about the centre of the earth but about the barycentre of the earth-moon system. To calculate this I need to know the mass of the moon. What difference would it make if the moon became twice as dense? The tides would become stronger. I can compare the strength of the lunar and solar tides, and hence compare the density of the moon with the density of the sun, and I can compare the density of the sun with the density of the earth as they both have satellites. Now I need pencil and paper…”
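The first step of that reasoning, often called the "moon test", can be sketched with modern textbook values (an illustration of the logic in the quote, not a reconstruction of Newton's actual figures; I use the sidereal month, the orbital period relevant to the centripetal acceleration):

```python
import math

g = 9.81                # surface gravity, m/s^2
R_earth = 6.371e6       # Earth's radius, m
r_moon = 3.844e8        # mean Earth-moon distance, m
T = 27.32 * 86400       # sidereal month, s

# Centripetal acceleration of the moon, from its orbit alone:
a_orbit = 4 * math.pi**2 * r_moon / T**2

# Inverse-square prediction: surface gravity diluted by (R_earth / r_moon)^2:
a_predicted = g * (R_earth / r_moon) ** 2
```

The two numbers agree to about one percent, which is the comparison that convinced Newton the same force holds the apple and the moon.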

Newton’s *anni mirabiles* were 1665-1667, when he was twenty-two to twenty-four years old. In 1666 the University of Cambridge went into lockdown because of the plague, and he retreated to his home base in Lincolnshire to think. This is when his theory of gravity, and so much more, was developed, and the semi-mythical apple fell from the tree.

The current Covid crisis has killed more people than the plague of 1666, which none the less is thought to have killed a quarter of the population of London. The current lockdown leaves academics with vast electronic resources, whereas in Newton’s day there was nothing to do but to think. Is there a lesson to be learned?

He was an outstanding mathematician, physicist, astronomer, historian, theologian, and (as master of the mint) civil servant. But Voltaire, who thought so highly of him, and was the lover of the great Émilie du Châtelet, the only translator of *The Principia* into French, asserted that he was so famous in England because he had a beautiful niece. Princes of Italy came to England in order to set eyes on him, and, for all I know, on his niece.

Reverting to the apple/moon comparison, a great variety of simple ideas come into play, and I intend to concentrate on simple ideas, rather than on the technical details. For now, I ask you some simple questions concerning the tides.

- The sun attracts the earth far more strongly than does the moon. The earth rotates about the sun, not about the moon. So why does the moon cause greater tides on the earth than does the sun?
- As the earth spins on its axis, the moon reaches its highest point in the sky, when it attracts the sea most strongly, approximately once every 24 hours. But we get a high tide approximately once every 12 hours. Why is this?
- Why only approximately every 12 hours?
- Some high tides are higher than others. What other factors may contribute to these discrepancies? Ignoring the weather, you are doing well if you can think of five.
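As a back-of-envelope check relevant to the first question (using rounded textbook values, so the ratios are approximate): the direct gravitational pull scales as M / d^2, but the tide-raising force is differential, scaling as M / d^3, and that extra factor of distance reverses the ranking.

```python
M_sun = 1.989e30        # mass of the sun, kg
M_moon = 7.342e22       # mass of the moon, kg
d_sun = 1.496e11        # Earth-sun distance, m
d_moon = 3.844e8        # Earth-moon distance, m

# Direct pull on the Earth (inverse square): the sun dominates by far.
pull_ratio = (M_sun / d_sun**2) / (M_moon / d_moon**2)     # roughly 180

# Tide-raising (differential) force (inverse cube): the moon wins.
tide_ratio = (M_moon / d_moon**3) / (M_sun / d_sun**3)     # roughly 2.2
```

The moon is vastly lighter, but it is so much closer that the steeper 1/d^3 law hands it the larger tide.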

If you are new to these ideas you are in much the same position as Newton, who (it seems) never saw the sea, but sat in his garden thinking.

I should perhaps mention that I am an emeritus professor of pure mathematics at Queen Mary, University of London. I work in algebra, and have had to learn much in order to understand the great breadth of Newton’s masterpiece.

I hope that my understanding has been sufficient for the purposes of the task I have undertaken. I was moved to produce my translation by my feeling that the Cohen-Whitman translation was too opaque, and based on an inadequate understanding of the text. I detail my attitude to their work in the preface to my translation, and acknowledge the help I have received from many people.

I should repeat here my gratitude to Carl Murray, who persuaded me to have my translation published and produced the diagrams, to Wolfram Neutsch, who read the entire manuscript, and saved me from some embarrassing errors, to Niccolò Guicciardini for much learned assistance, and to David Tranah of Cambridge University Press, who fortunately insisted on setting the translation in a bright modern style. I am grateful for his hard work and professionalism which also saved me from a number of errors.

The online annotated translation of *The Principia* (www.17centurymaths.com) by Ian Bruce unfortunately did not come to my attention until my translation was in the hands of C.U.P.

There are cynical reasons for the perpetual arguments—someone benefits. But cynicism aside, is there something we can do to improve public discussion and make positive progress?


In my engineering classes, I spend quite a bit of time encouraging students to observe and note down the process of learning. There are things to learn in class, of course, but it would be a shame not to use the opportunity of learning something new to teach ourselves about *how* one learns something new. What is that process? Well, we confront a new situation, we ask questions, we try out solutions or pathways to solution, and we pick something to try. Then, we see if it works. If it works, we win! If it does not work, we review how we got to the non-working solution, and we try something else. We are only finished when we find an answer that works.

How does this anecdote help us to address the problem of stalled public discourse?

Let’s compare public discourse to the STEM classroom. We start with a topic, which is usually something wrong in society that someone asserts needs to be addressed or changed. Let’s use climate change as an example. According to my description of the classroom process, once the topic has been raised, the next step is to ask questions and identify solutions and pathways to solution. That works for public discussion of climate change. The next step is to pick something to try. This seems to be the moment when public discourse breaks down—we can’t agree on what to try.

Why does public discussion break down at this step while the scientific-engineering process moves forward? Well, it would be revisionist history to say that scientific discovery progresses smoothly. Science is performed by people, after all, and the same conflicts that beset public discussions rankle in science too. Germ theory, espoused by Louis Pasteur, was roundly mocked when presented to the great minds of the 19th century.

What makes the difference, then? Well, in the case of germ theory, it made a difference that Pasteur was right. Germs do cause disease, and the hygiene protections that the theory suggests do work to reduce disease spread. So, the difference is that there can be hard measurements made and correctly interpreted to lead us to positive progress.

Thus, I arrive at my suggestion for the public discourse problem—hard measurements correctly interpreted. And perhaps a bit of patience. It took years for the germ theory to be accepted. These years were spent taking data, interpreting them or improving them, and seeing what it all added up to.

It may be a bit tough to be patient, given what’s at stake with some of the questions we are debating in the public sphere. One aspect that may help to accelerate the process of getting it right is to encourage widespread familiarity with the nature of measurements and with the uncertainty that is unavoidable in hard measurements. The more frankly we interrogate our data, assigning appropriate error limits, the more we can learn from them. We should not imagine that they are more definitive than they are—we may miss or misstate the truths that they contain. Widespread literacy on the topic of uncertainty in measurements would go a long way in improving public discourse on the topics that divide us.
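A minimal sketch of "assigning appropriate error limits": report a measured mean together with its standard error rather than as a bare number. The readings below are invented for illustration.

```python
import math

# Eight (invented) repeated measurements of the same quantity,
# e.g. local gravitational acceleration in m/s^2:
readings = [9.78, 9.84, 9.80, 9.83, 9.79, 9.82, 9.81, 9.77]

n = len(readings)
mean = sum(readings) / n
variance = sum((x - mean) ** 2 for x in readings) / (n - 1)   # sample variance
std_error = math.sqrt(variance / n)                           # uncertainty of the mean

# Quote the result as mean +/- ~2 standard errors (a rough 95% interval):
low, high = mean - 2 * std_error, mean + 2 * std_error
```

Stating the interval rather than the point value is what lets two parties argue about whether their results actually disagree, or merely scatter within their shared uncertainty.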

And there will be more examples of algorithms migrating to the inside of the body and raising unique legal issues. Just consider what is happening now in the development of neuroprostheses, such as an artificial hippocampus, the part of the brain where memory resides. Clearly, challenging legal issues will arise from such developments. For example, will a person's memories be susceptible to editing?

And will it be possible to transmit information (such as a commercial or political ad) directly into one's brain via a wireless connection? This may sound like science fiction, but there seems to be a trend for technology equipped with algorithms to enter the body and to be directed at the brain; in fact, a lot of research is underway to do just that. And of course, current law will be challenged by such developments. All these examples call for what I term a Law of Algorithms, which I have been writing about recently. What I have tried to express in this brief essay is that technology and law do not operate independently, and in the case of algorithms, the law in many different areas will be challenged. However, by addressing concerns associated with algorithms from the perspective of law, we may be able to create a more just and equitable society: laudable goals.
