Disease surveillance can be conducted at a variety of levels, from tracking and tracing individual diagnosed cases to monitoring aggregate testing or case data as a measure of disease incidence in a specific or more general population. The former is a medical and epidemiological problem, while the latter is both a statistical problem, in the sense of using samples of data taken over time to infer trends in a population, and an epidemiological problem if a potential increase is identified.

At its most fundamental, this is an exercise in separating signal from noise, where the goal is to quickly identify an increase in disease incidence (the signal) in the presence of data that will naturally fluctuate over time (the noise). The extremes will generally be clear: an individual who presents with obvious COVID-19 symptoms and tests positive, or a large outbreak with many individuals with symptoms and/or positive test results. Identifying more subtle changes in disease incidence, prior to and in order to head off a large outbreak, is more challenging.

The detection challenge is partly quantitative, where it can be quite difficult to identify a subtle signal amid noisy data. But the challenge is compounded by practical issues related to COVID-19, a disease that can spread asymptomatically (or nearly so) and for which case outcomes may lag policy changes by weeks. It may be further compounded by differing organizational priorities, where the existence of subtle changes may be disputed and yet where it is critical to quickly identify increases in disease incidence before they become exponential.

Our focus here is on the statistical tools to address the quantitative problem, though note that proper implementation and transparent use of the right tool may also help address some of the other challenges. The fundamental idea is quite simple. Using historical data, the existing or desired disease incidence rate is quantitatively characterized, typically in terms of an average rate and some measure of variability such as the standard deviation. Future data are then monitored against this historical baseline; if an observation is significantly above the historical average, that triggers an epidemiological investigation into whether there is an event that warrants some sort of intervention and/or policy change.

The figure below illustrates one approach using COVID-19 case data from Knox County, Tennessee. It uses a moving average and standard deviation from the previous 14 days, less any unusual spikes in the data, to establish a warning threshold (the yellow line) and signal threshold (the red line) at 1.5 and 3 standard deviations above the moving average. In the figure, the yellow and red bars denote counts that have exceeded the warning threshold or the signal threshold, respectively, which is an indication that the count for that day was unusually high compared to observations from the previous two weeks.

This particular surveillance algorithm is implemented as follows: each day, the 14-day moving average and standard deviation are calculated, from which new warning and signal thresholds are specified. The next day’s case count is then compared to these thresholds and appropriate action is taken if the observed count exceeds one or both of them. For example, at the far right of the figure we see that the thresholds for May 24th have been calculated using the data from May 10–23, less the spike on May 11th. The question is where the observed count, once it is observed, falls in relation to that day’s thresholds. (See Illustrative Surveillance Example Using Knox County TN COVID-19 Case Data.xlsx for the Excel spreadsheet with the calculations.)
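For concreteness, here is a minimal sketch of this scheme in Python with pandas. The function and column names are illustrative rather than taken from the spreadsheet, the 1.5 and 3 standard-deviation multipliers are the ones described above, and the spike-trimming step used in the Knox County example is omitted for brevity:

```python
import pandas as pd

def surveillance_thresholds(counts: pd.Series, window: int = 14,
                            warn_k: float = 1.5, signal_k: float = 3.0) -> pd.DataFrame:
    """Compare each day's count to thresholds built from the previous `window` days.

    `counts` is a daily case-count series indexed by date. Note that the
    published example also drops unusual spikes from the baseline before
    computing the mean and standard deviation; that step is omitted here.
    """
    # Use only *previous* days for the baseline, so shift the series by one day.
    baseline = counts.shift(1).rolling(window)
    mean, std = baseline.mean(), baseline.std()

    out = pd.DataFrame({
        "count": counts,
        "warning": mean + warn_k * std,   # the yellow line
        "signal": mean + signal_k * std,  # the red line
    })
    out["flag"] = "none"
    out.loc[out["count"] > out["warning"], "flag"] = "warning"
    out.loc[out["count"] > out["signal"], "flag"] = "signal"
    return out
```

Each day the most recent row of the output gives the thresholds against which the next observed count is judged.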

Note how the figure shows the background incidence fluctuating over time and this is reflected in threshold changes. Because this algorithm bases the decision thresholds on a 14-day average, it allows for this type of variation, though depending on the surveillance goals this may or may not be desired. For example, this algorithm will be very effective at detecting large increases quickly but it will be poor at detecting a slow steady increase in disease incidence. For that, there are other algorithms that are more effective.
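One standard option for that situation, discussed in the biosurveillance texts listed under “For Further Reading”, is a CUSUM-type algorithm, which accumulates many small deviations from the baseline rather than judging each day in isolation. A minimal sketch, assuming a fixed baseline mean and standard deviation and purely illustrative parameter values:

```python
def cusum_signals(counts, mean, std, k=0.5, h=4.0):
    """One-sided CUSUM on standardized daily counts.

    k is the reference value (allowance) and h the decision threshold, both in
    standard-deviation units; a signal is raised whenever the accumulated
    statistic exceeds h, after which the statistic is reset.
    """
    s, signals = 0.0, []
    for day, x in enumerate(counts):
        z = (x - mean) / std          # standardize against the baseline
        s = max(0.0, s + z - k)       # accumulate excesses above the allowance
        if s > h:
            signals.append(day)
            s = 0.0                   # restart after signalling
    return signals
```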

Now, while the idea is simple, algorithmic options and implementation details add complexity. For example, the choice of threshold requires making a sensitivity-specificity tradeoff in terms of speed of detection of an increase in incidence versus the rate of false positive signals, a choice that has both practical and perhaps political implications. There are also algorithmic options, where certain algorithms are more appropriate for some types of data and, as we just mentioned, some algorithms are better suited to detect certain types of incidence changes than others. And, there are computational details as well as basic questions about which historical data to use to characterize the “normal” background disease incidence and how to update that information as time progresses.

Unfortunately, we don’t have the space here to address all of these issues. However, important implementation details aside, hopefully this short note has made it clear that the appropriate use of these types of surveillance tools can help frame the process by which decisions are made and help remove subjectivity and seat-of-the-pants decision making. Perhaps most importantly, from a public health viewpoint, appropriate implementation of a good surveillance system can improve data and decision making transparency and thereby increase confidence in the public health system.

**For Further Reading**

Fricker, R.D., Jr. (2013). Introduction to Statistical Methods for Biosurveillance, with an Emphasis on Syndromic Surveillance, Cambridge University Press.

Fricker, R.D., Jr., and S.E. Rigdon (2018). Disease Surveillance: Detecting and Tracking Outbreaks Using Statistics, Chance, **31**, 12-22.

Rigdon, S.E., and R.D. Fricker, Jr. (2019). Monitoring the Health of Populations by Tracking Disease Outbreaks and Epidemics: Saving Humanity from the Next Plague, CRC Press.

Why do we blame algorithms for our woes? Because they push us out of our comfort zone? No doubt. But also because we often agree to use them, not understanding what they really are and how they work. Our dreams and our fears are the consequences of this ignorance. We fear algorithms because we see them as mysterious beings, endowed with supernatural powers, perhaps evil intentions.


In the book, we clarify the opaque vocabulary often used in this context, explaining the basics of this science for a general audience. To free yourselves from magical thinking, to separate legitimate hopes from childish fantasies and justified fears from unfounded anxieties, we invite you on a journey through the world of algorithms. We discuss the digital society and the new kind of human it is shaping, illuminating societal and philosophical issues such as the transformation of work, property, privacy… that are often explained confusingly in the media.

We explain how scientific knowledge across all fields is being transformed by computer science, big data, machine learning… It is essential to become familiar with these notions in order to better understand the transformations of the world and to acquire a more modern viewpoint.

The goal of *The Age of Algorithms* is to make you more aware of the environments you live in, and to empower you with your own viewpoint and understanding instead of leaving you frightened by new technology. This, we believe, will help you improve your life in the digital world.

Algorithms can lead to the best or the worst outcome, but we must never forget that they do not, in themselves, have intention. Human beings have designed them. They are what we want them to be. That is also the message of the book.


**Figure 1: (a) Mixing of milk and coffee; (b) Cascading process in turbulence; (c) Energy flux Π_{u}(k)**

Let us understand the turbulent mixing of milk in coffee in some detail. Stirring with a spoon creates a mini cyclone, technically called an *eddy*, of the size of the cup (L). This large eddy generates smaller eddies of size L/2, which in turn generate even smaller ones of size L/4, and so on (Figure 1(b)). This process continues down to the smallest eddies, of size η, where the fluid energy is converted to heat.

It is convenient and customary to compute the energy transfer from one scale to the next in Fourier space. The wavenumber is denoted by k, which is an inverse length, and the energy flux by Π_{u}(k). In the coffee example, since there is no energy injection in the intermediate range of scales, where viscous dissipation is also weak, the same energy flux flows down this inertial range. Hence, Π_{u}(k) = const., or dΠ_{u}(k)/dk = 0 (see Figure 1(b,c)). The corresponding energy spectrum is k^{-5/3}. This is Kolmogorov’s theory of turbulence.
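In symbols, this is the standard dimensional argument: writing ε for the energy dissipation rate and C for the Kolmogorov constant,

```latex
\Pi_u(k) = \varepsilon = \text{const.}
\quad\Longrightarrow\quad
E(k) = C\,\varepsilon^{2/3}\, k^{-5/3}
\qquad \text{for } 1/L \ll k \ll 1/\eta .
```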

Now imagine a strange experiment in which we mix polymers into the coffee (not for drinking!). Polymers act like small springs, and they extract a fraction of the kinetic energy flux Π_{u}(k). Hence Π_{u}(k) decreases with k, or dΠ_{u}(k)/dk < 0. On the other hand, if we heat up the coffee, the thermal energy enhances Π_{u}(k), leading to dΠ_{u}(k)/dk > 0. We illustrate these two cases in Figure 1(c). The above examples mirror real-life flows: dΠ_{u}(k)/dk < 0 in magnetohydrodynamic turbulence and in flows with polymers, while dΠ_{u}(k)/dk > 0 in thermal convection and in shear flows. Thus, the energy flux helps model these flows, as well as providing important inputs, e.g., on turbulence drag reduction in polymeric flows and in magnetohydrodynamics.

In the monograph I describe the basics of energy transfers and fluxes in turbulence, and then employ them to describe scaling and other properties of turbulence in hydrodynamics, magnetohydrodynamics, passive scalars, buoyant flows, rotating flows, active scalars and vectors, compressible flows, etc. Many of the above ideas were discovered by our group, so the book brings a unique perspective to these topics.

Cambridge University Press has produced an attractive book that, I hope, would be very useful to graduate students, scholars, and researchers.

**About the Author:**

**Mahendra K. Verma**, a leading researcher in the field of turbulence, holds the Sanjay Mittal Chair in the Physics Department of the Indian Institute of Technology Kanpur, India. He is a recipient of the Swarnajayanti Fellowship, the INSA Teachers Award, and the Dr APJ Abdul Kalam Cray HPC Award, and a fellow of INSA. In addition to this book, he has authored the books *Introduction to Mechanics* and *Physics of Buoyant Flows: From Instabilities to Turbulence*. His other research interests include nonlinear dynamics, high-performance computing, and non-equilibrium statistical physics.


The teachers of these new students taught two subjects. One was the classical geometry of the Greeks, less useful than it might appear, but universally held up as a model of reasoning in which rigorous argument led from clearly stated first principles to final conclusions. The other was modern mathematics, in particular calculus, which lacked any such clear structure.


Certainly modern mathematics dealt with numbers, but numbers seemed to be drawn from a rag bag of different objects. There were numbers like 1 and 2 which everybody understood, and fractions like 1/3 or 3/9 which were the same number and 2/3 and 1/4 which were different numbers. Then there was 0, which was a number like every other number except that you were not allowed to divide by it. To these were adjoined the negative numbers, which were every bit like positive numbers, provided that you remembered that the product of two negative numbers was a positive number. Negative numbers had no square root unless you allowed mysterious entities called complex numbers, which were also exactly like other numbers except when they were not (for example complex numbers usually had three complex cube roots). Finally there were objects like π which were not fractions, or like Euler’s γ which may or may not be a fraction, but which everybody agreed were bona fide numbers to be treated exactly the same as other numbers.

It is doubtful if this worried many students then who, like most students now, wished just to pass their exams, get a good job and enjoy themselves. However it did worry some of their professors and during the course of the 19th century, with many fits and starts, they completed the difficult task of rigourising calculus and the linked task of providing a coherent account of the numbers used in calculus.

In 1930, Landau published a little book, *Foundations of Analysis*, setting out this account at undergraduate level. Landau’s much loved text is still in print but, as Landau says, is written ‘in merciless telegraph style . . . as befits such easy material’.

There is, I think, room for a more relaxed account which gives some idea of where the ideas come from and why they are used in the way they are used. My book is an attempt at such an account.

Today’s students in Earth and environmental sciences face a transition with the increased use of new techniques and computer models to analyze, synthesize, and understand large spatial and temporal data sets. To use these models and techniques effectively requires at least a passing acquaintance with the mathematical concepts and methods that underlie them. However, many students are either intimidated by the subject or have not used the mathematics they know for many years and so find themselves in need of a gentle reminder.

I wanted to write a textbook that presents an unintimidating introduction to the basic mathematical techniques that students will likely encounter. The material in the book is based on three courses that I teach at the undergraduate and graduate levels, and which cover quantitative methods and basic oceanographic and climate modeling. These courses are designed with many opportunities for students to develop and test their understanding by working through problems. These vary from those that fill in the steps of a worked example to more complex problems requiring the students to formulate equations and develop an appropriate path to their solution. Although I teach mostly students from marine sciences, students from other disciplines such as atmospheric sciences, ecology, geology, and even economics have taken these courses. This has led me to develop examples and assignment problems that cover a range of disciplines in the Earth and environmental sciences, and many of these have found their way into the book.

A computer is now an essential scientific tool. Many mathematical methods have been implemented in a wide variety of programming languages and a series of Matlab and Python codes illustrating some of these have been written to supplement the material in the book.

I hope that this book gives students and researchers the mathematical tools they need to better understand complicated data analysis techniques and the inner workings of computer models.

**Mathematical Methods in the Earth and Environmental Sciences**

by Adrian Burd, *University of Georgia*



Mathematics rivals theology when it comes to ontological difficulties; consequently there are today three very different philosophical positions that can be taken.

Platonists assert that there is an intangible but intelligible world of mathematical objects, and that the business of the mathematician is to explore this world, capture what it contains for further study, and report back; therefore mathematicians do not create or invent, but discover.

Formalists assert that mathematics is the manipulation of meaningless symbols according to an arbitrary set of rules, the only condition being that the rules must be consistent. Within such a scheme the symbols have no referents; but these, together with other useful appurtenances such as ideas of truth, falsehood and proof, are provided by a meta-mathematical language that forms no part of mathematics proper. This makes mathematics a very unappealing discipline because, as G. H. Hardy remarked, virtually everything that a mathematician finds worthwhile and interesting turns out not to be part of mathematics at all.

The third position is that of the intuitionists, also called constructivists. They claim that the objects of mathematical contemplation have no objective reality, but rather are produced, or constructed, in the human mind, and therefore that mathematics can only encompass entities for which a method of construction can be proposed. This leads them to deny the existence of much that most mathematicians take for granted, such as transcendental numbers and the law of the excluded middle, which states that if not-P is false, then P is true. In particular, the immensely fruitful method of proving P by showing that not-P leads to a contradiction is unavailable to an intuitionist. However, intuitionism is probably the most logically supportable of the three systems and is therefore popular with philosophers.

It is not surprising that most working mathematicians are unconcerned with philosophical niceties, and just get on with the business in hand. Nevertheless, the terminology that they, and we, cannot avoid using—in teaching, writing and researching—is that of mathematical Platonism. Regardless of one’s philosophical standpoint, the entities on which mathematical reasoning operates must be referred to as if they existed, and so a mathematician’s work appears to be that of increasing our knowledge of these entities, and the relations between them. The history of mathematics, therefore, is usually presented as the history of the steady accumulation of this knowledge, and this not only implies a philosophical position, but also tends to impose on the narrative a historical set of definitions and distinctions as to what does or does not constitute mathematical knowledge.

This suggests the possibility of an alternative approach in which the spotlight is trained on the activities undertaken by mathematicians, an approach which I have adopted in my book *From Servant to Queen.* The aim is no longer to elucidate the development of mathematical knowledge, with the mathematician taking the role of a mathematical knower; instead mathematics is treated as a practice that can be evaluated and described from external observation and evidence.

For this approach to have value as history, it must be applied to a collective of mathematicians who were bound together in some way; in my book I consider nineteenth-century mathematicians in Britain. Writing history in such a way is also consonant with the characterisation of mathematicians that obtained in former times, when they were regarded not as discoverers and knowers of mathematical truth, but as people who could do useful and interesting things with the aid of a toolkit that included mathematical techniques. Furthermore, in Victorian times there were significant changes to the vocabulary that was used to describe mathematics and its practitioners; so asking why these changes occurred, whom they benefited, and whether they reflected new mathematical practices can all help to illuminate history in a manner that is not possible if we impose on the past a conception of mathematics as a body of knowledge, and of mathematicians as expositors thereof.


Not everything that happened in the past has a place in history. Everywhere there need to be linkages by which people, circumstances and events can be made relevant to a wider and cohesive whole; failure in this regard yields a mere chronicle. The internalist nature of most history of mathematics is well-known, but an historical treatment that locates mathematical activity in wider society and culture can provide the linkages that transform a chronicle into true history.

To read more from John Heard on the topic of nineteenth century mathematics check out his book *From Servant to Queen: A Journey through Victorian Mathematics.*

There are two very significant consequences of this. The first is a levelling of the playing field so that (what start out as) smaller businesses can now compete disruptively with larger rivals. This can be seen in the finance market, where smaller ‘Fintech’ businesses are challenging existing ‘bricks and mortar’ banks for projected mobile payments revenues running into the trillions of dollars; and also in the emergence of Airbnb, Uber and other platforms of the ‘sharing economy’ that have disrupted the worlds of accommodation, transport, finance, and so on.

And the second consequence, which in part develops from the first, is the emergence, within the past 20–30 years, of tech titans such as Facebook, Amazon, Apple, Netflix and Alphabet (Google), referred to generically as the FAANGs, whose economic and social power and influence is ubiquitous across the developed world.

The rapid growth and high productivity levels within the sector have demonstrated the economic benefits of entrepreneurship. Furthermore, research has shown that Science, Technology, Engineering & Maths (STEM) entrepreneurs in particular build on innovative foundations to create sustainable businesses.

My first experience of founding a digital technology business was more than 30 years ago. At that time, the vast majority of students graduating with STEM degrees would unquestioningly expect to join one of the major technology or service businesses that dominated the sector. Interestingly – apart from notable exceptions such as IBM, BAE Systems and Siemens – few of the tech giants of that day have survived. University courses reflected this situation by focusing their curricula and teaching methods on building core scientific and technical knowledge.

Over the past few years I have been active as a mentor and investor working with a range of start-ups and university spin-outs, through the Royal Academy of Engineering’s Enterprise Hub and some excellent privately-funded accelerator programmes. The vibrancy of the UK tech start-up sector is impressive and gives cause for optimism. Yet in London – a global hub of entrepreneurial activity – more than 30% of tech start-ups are currently struggling to recruit the talent they need to achieve their potential.

Against this backdrop it is interesting to review the ways that universities have adapted to the changes in the economy and the tech landscape; how are STEM students being equipped to start their own businesses or to join exciting early-stage tech companies and address their skills shortages, and how have curricula and teaching methods changed?

Recent experience of teaching business innovation and entrepreneurship to STEM undergraduates and graduate students has provided me with three insights:

- Students studying STEM disciplines readily spark at the opportunities offered by the fast-moving world of tech business, and are keen to apply their inherent analytical skills to business innovation and design.
- A significant majority of STEM students graduate with only a rudimentary and anecdotal appreciation of how digital techniques, technologies and processes apply to business and entrepreneurship, despite a formal education which includes a wealth of relevant digital knowledge and skills. Recent UK research discovered that only 10% of engineers and 5% of scientists (1% of physicists) are exposed to entrepreneurship education. There are of course some notable institutions that have embraced the change wholeheartedly – to their significant benefit.
- While there are many excellent books that address innovation and entrepreneurship, there is real need for a student textbook that coherently and in a structured and practical way addresses business for those planning to enter the modern digital world; a book that inspires, encourages and supports graduates to become job creators rather than merely to seek employment.

Research has shown that entrepreneurship can be learned and developed. That is not to say that everyone has the potential to create a $1bn business, but that, given a structured and practical introduction, the majority of STEM graduates will be able to contribute significantly to a start-up, early-stage development, or innovation within a larger organisation. And this means it is possible to address the skills shortage that currently frustrates both businesses and the wider economy, and also to address the related US problem of rapid technical obsolescence within STEM graduates.

In a nutshell, this was my motivation for producing *Digital Innovation and Entrepreneurship*: a book that bridges the gap between formal STEM education and the digital business world; and provides a way of introducing innovation and entrepreneurship as core components of the skillset of the modern digital professional.

Bridging this gap is important to STEM students because the digital economy now encompasses more than half of the world’s population, and succeeding in this sector increasingly demands an effective balance of knowledge and skills in both business and technology. It is also important to higher education institutions in an increasingly competitive environment in which students demand courses that better reflect the world they expect to join. Lastly, it’s important to the economy of every advanced nation that the continuing vibrancy of its tech sector is fuelled by well-equipped and motivated STEM graduates.


In most departments the training starts with courses on ‘Mathematical Methods for Physicists’, where students learn the basics of integration, divs and grads, urgently required in the first year curriculum. But the role of mathematics in physics transcends that of a collection of methods. At universities where this truth is reflected in the curriculum, conceptual teaching is often outsourced to departments of mathematics. After all, who would be better prepared to teach mathematical concepts than mathematicians themselves?

The above system works; otherwise it would not be implemented at a majority of academic institutions. The question is whether we can do better. We believe the answer is yes, and that the key to a modernized and more pedagogical approach to teaching mathematics in physics lies in a *stronger integration of conceptual and methodological elements in the mathematics education of physicists by physicists*.

What we have in mind is best explained on an example, the introduction of *vectors* early in the curriculum: the average beginner’s course starts from a hands-on introduction of vectors in *R ^{n}*, with emphasis on

There is a better way of getting started. At the very beginning, invest two or so weeks into a systematic, bottom-up discussion of algebraic foundations — sets, groups, number fields, linear spaces. Students trained in this way ‘see’ groups and vectors everywhere, in functions, matrices, *R ^{n}* and

Similar things could be said about integration theory, vector analysis, (differential) geometry, and other key disciplines of mathematics – conceptual and systematic introductions are rewarding investments which quickly pay off in fast and sustainable progress for students. Our belief in this principle is backed by experience. We have taught the reformed lecture course underlying our textbook about 10 times at two universities. Students trained in this way generally showed higher levels of confidence and proficiency in mathematics than those who went through the standard system. Remarkably, average and weak students are among those who benefit most. For them, it becomes easier to understand connections otherwise seen only by the best of the class. It should also be stressed that emphasis on mathematical concepts does not imply more abstraction. Yes, it does lead to more ‘hygiene’ in notation and to a language appearing to be ‘more mathematical’ than what is standard in physics courses. However, these elements are anchored in intuitive explanations, and hence are not perceived as abstract. They support students’ understanding, including that of concurrent courses in pure mathematics.

Encouraged by our uniformly positive experience we suggest a teaching reform at large, not just at our own universities. This was the principal motivation for the substantial work we put into converting our course into a textbook. It is meant to provide a template for what we hope may become a more rewarding introduction to the mathematics needed in contemporary physics.

I had been interested in RH for some time, studying the zeta function through flows such as ds/dt = ξ(s), which provided an equivalence. However, this work, which had a topological basis, “hit the wall” at the point where the structure of the flow near an essential singularity appeared to be important. The underlying theory was not available and, in the circumstances, I was not able to develop it.

A visit to the University of Waikato by Tim Trudgian stimulated work together on aspects of Robin’s inequality and its RH equivalence. In addition to his sterling detailed work on Volume One Chapter 7, his own published work improving Turing’s method for zeta zero analysis was of great value in many chapters.
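For readers who have not met it, Robin’s criterion states that the Riemann Hypothesis is equivalent to the inequality below holding for every n > 5040, where σ(n) is the sum-of-divisors function and γ is Euler’s constant:

```latex
\sigma(n) < e^{\gamma}\, n \log\log n \qquad \text{for all } n > 5040 .
```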

I approached CUP at some stage near the completion of a draft of Volume One and they showed interest. However, the expert feedback they received was mixed – not only had Volume One not covered some of the most valuable equivalences to RH, it did not cover GRH. The latter was considered to be much more useful than RH for applications, and the idea of two volumes took shape.

Regarding Cambridge, I had been impressed with their expertise and dedication to publishing good mathematics when I worked with them, supplying an appendix and software for Dorian Goldfeld’s book *Automorphic Forms and L-functions for the group GL(n,R)* (Cambridge, 2006). The new experience of writing “Equivalents” showed that this was no exception.

The writing process did not always go smoothly. Some parts, including whole chapters in one case, were scrapped. I decided that the details were either too technical or would be too taxing for the reader. My target average reader was a graduate student considering potential research problems in pure mathematics and looking for accessible problems. I avoided using results which were at the pre-print stage at the time of writing. This meant sometimes leaving out published results which depended on unpublished work.

For Volume One, the seminal 1962 paper of Rosser and Schoenfeld, and other related papers, provided particular organizational challenges. For Volume Two, I spent a long time working with Zagier’s group representation equivalence. Eventually I decided to give up: it would be too difficult to give the average reader an adequate background in the specialized theory. In addition, Zagier’s method could not be extended to number fields.

Given this graduate-student target audience, I included quite a lot of background material: for example, the chapter on numerical estimates for arithmetic functions, and the work of Erdős and others on abundant numbers, in Volume One. In Volume Two, an extensive set of appendices provides proofs for the more specialized results referred to in the body of the text, which the reader might not necessarily meet in graduate courses.

I was often asked whether I had (by now) solved RH! Writing a tome of this size does not leave much energy for such grandiosity, but I did twice believe I might have disproved RH. This was while writing Volume Two: once when considering integral equations, namely the method of Sekatskii, Beltraminelli and Merlini in Volume Two Section 8.3, and once when developing examples for Weil’s explicit formula in Volume Two Section 9.5. In both cases the approach came to nothing.

After the volumes were published I created a website for Errata and notes, GRHpack and RHpack. I have had an excellent volume of feedback giving corrections and other comments which have been included or will be once time permits – this is especially welcome. Some folk even indicated they had been right through both volumes!

As expected, equivalents to RH and GRH continue to evolve. In late 2017, the University of Waikato had a visit from Ken Ono, who gave a fascinating lecture related to the Jensen polynomial equivalence of Pólya from 1927, namely that RH is equivalent to all of the Jensen polynomials of the Xi function being hyperbolic. He described a discovery by Don Zagier, Michael Griffin, Larry Rolen and himself which, among other advances, shows that for each degree all but a finite number of the Jensen polynomials are hyperbolic. This work is being written up and will be referenced in the “Errata and notes” relating to Volume Two Section 4.4 when a pre-print appears on arXiv.

In February 2018, Brad Rogers and Terence Tao posted on arXiv an article entitled “The de Bruijn–Newman constant is non-negative”, giving the RH equivalence Λ = 0. A full report on this work would make a nice addition to Volume Two Chapter 5. Both the Rogers/Tao and Griffin/Ono/Rolen/Zagier works will be included in a second edition, should one be published.

Find out more about Kevin Broughan’s two-volume work *Equivalents of the Riemann Hypothesis* here.

The famous American physicist Richard Feynman was born 100 years ago, on the 11^{th} of May 1918, and it is worthwhile spending a few moments reflecting on what makes his achievements so enduring. To the general public, Feynman first became widely known with the publication in 1985 of a best-selling collection of stories from his life in physics called *‘Surely You’re Joking, Mr Feynman’*. The title refers to an incident during his introduction to graduate school at Princeton, at an event called the ‘Dean’s Tea’. This was unfamiliar territory for Feynman, who had grown up in Far Rockaway, a neighborhood in the New York City borough of Queens, and who had gone to MIT for his undergraduate degree. But it was Feynman’s participation in the presidential commission that investigated the Space Shuttle Challenger disaster in 1986 that made him one of the best-known physicists in the world. At a public meeting of the Commission, Feynman famously demonstrated the cause of the shuttle disaster using a rubber O-ring, a clamp and a glass of ice water. It is still worthwhile looking at the video of the event on YouTube.

To physicists, Feynman is revered for many reasons but is probably best known for the ‘Feynman diagram’ approach to calculations of quantum field theory. Feynman’s approach to field theory calculations was pictorial and in marked contrast to the more formal mathematical approach of his fellow Nobel Prize Winner, Harvard professor Julian Schwinger. As Schwinger later said:

*“Like the silicon chips of more recent years, the Feynman diagram was bringing computation to the masses.”*


Feynman diagrams are now an integral part of theoretical physics. Ironically, Freeman Dyson, the person who proved that Feynman’s intuitive approach was actually the same as Schwinger’s more mathematical approach, never won the Nobel Prize although he was instrumental in getting Feynman’s space-time approach accepted by people like J. Robert Oppenheimer and the rest of the physics elite.

When I was an undergraduate student in Oxford I first came across Feynman through his famous ‘Red Books’ – the three-volume set of his ‘Lectures on Physics’. Feynman dedicated two years of his life to creating a two-year introductory course in physics for Caltech students that covered most of modern physics – mechanics, kinetic theory, electromagnetism and quantum mechanics. Although many of the students reportedly found the lectures hard-going despite Feynman’s inimitable style of lecturing, his Lectures on Physics have become a staple part of the education of much of the physics faculty around the world.

After completing a D.Phil (the Oxford equivalent of a Ph.D.) in theoretical physics in 1970, I was excited to be awarded a Harkness Fellowship to go to Caltech for two years as a post doc. Just before I left Oxford, Feynman had published a paper on ‘partons’ – an intuitively appealing picture of the proton as made up of point-like constituents. There was also great interest in the new experimental results from SLAC, the Stanford Linear Accelerator Center, on ‘deep inelastic scattering’ of electrons from protons. Feynman had originally only applied his parton ideas to proton-proton scattering, but on a visit to SLAC he had recently given a seminar showing how the new deep inelastic scattering results could be understood using his parton model of the proton.

I arrived at Caltech in 1970 feeling both trepidation and excitement, and it was like moving from the slow lane to the fast lane on the freeway. At Oxford we had sort of absorbed the idea that the physics world revolved a little around Oxford, but at Caltech it was clear that, to a first approximation, the UK, Europe and the rest of the world were largely irrelevant. This was the ethos of the theory group at Caltech with its two Nobel Prize winners, Richard Feynman and Murray Gell-Mann. In actual fact, my old professor in Oxford, Dick Dalitz, was one of the few physicists who had taken seriously the proposals by Gell-Mann and, independently, by George Zweig, then a professor at Caltech, for quarks as fundamental constituents of matter. Dalitz had developed a detailed quark model for baryons and mesons and showed that this had remarkable power to reproduce many features of the hadron spectrum found by experiment. Despite its clear theoretical inconsistencies, Dalitz regarded his explicit quark model as similarly useful to Bohr’s equally inconsistent model of the atom. Just as with Bohr’s model, Dalitz was convinced that the quark model pointed the way to some deep truths about Nature.

Feynman was never one to take other people’s calculations on trust and so he had developed his own version of the quark model with graduate student, Finn Ravndal, and post doc, Mark Kislinger. Perhaps because of his work with them, Feynman often used to have lunch with the graduate students and post docs at the Caltech campus cafeteria, universally known as ‘The Greasy’. It was here that I first heard versions of Feynman’s stories that he and fellow bongo drummer, Ralph Leighton, later wrote up for publication. The intellectual rivalry between Gell-Mann and Feynman was legendary and Gell-Mann frequently grumbled about what he regarded as Feynman’s ‘myth making’.

My most intimidating moment at Caltech was at an informal lunch-time lecture I had agreed to give to the experimental particle physicists. The group was led by new Nobel Prize winner, Barry Barish, with Frank Sciulli and they had just been awarded funding for an important experiment on deep inelastic neutrino scattering. Feynman’s parton explanation of deep inelastic electron scattering had been written up – with due acknowledgement to Feynman – by ‘BJ’ Bjorken and Manny Paschos who had both attended his lecture at SLAC. All I was going to do in my lecture was to explain how the parton model could be applied to neutrino scattering. However, you can imagine my surprise when I arrived to give my talk to see Feynman sitting in the audience. In fact, all went well until I was nearing the end of the lecture when Feynman jumped up and said:

*“Stop. Draw a line. Everything above the line is the parton model – below the line are just some guesses of Bjorken and Paschos.”*

As I rapidly became aware, the reason for Feynman’s sensitivity on this point was that Murray Gell-Mann was going around the Lauritsen building at Caltech growling things like *“Anyone who wants to know what the parton model predicts needs to consult Feynman’s entrails.”* The point that Feynman was making was that all the results above the line in my seminar were identical to predictions that Murray had derived using fancier algebraic techniques. Feynman just wanted to dissociate his parton model predictions from some of the wilder parton model predictions of others. My lecture was just an opportunity for him to do that.

What made a Feynman lecture unique? The well-known Cornell physicist David Mermin once said, *“I would drop everything to hear him give a lecture on the municipal drainage system.”* Why was this? An LA Times editor captured the essence of a Feynman lecture with the words:

*“A lecture by Dr. Feynman is a rare treat indeed. For humor and drama, suspense and interest it often rivals Broadway stage plays. And above all, it crackles with clarity. If physics is the underlying ‘melody’ of science, then Dr. Feynman is its most lucid troubadour.”*


The article went on to say:

*“No matter how difficult the subject – from gravity through quantum mechanics to relativity – the words are sharp and clear. No stuffed shirt phrases, no ‘snow jobs’, no obfuscation.”*

In his Nobel Prize lecture, instead of giving a talk about the beautiful Feynman diagram framework he had created, Feynman chose to show some of his missteps along the way to his eventual success:

*“That was the beginning and the idea seemed so obvious to me and so elegant that I fell deeply in love with it. And, like falling in love with a woman, it is only possible if you do not know too much about her, so you cannot see her faults. The faults will become apparent later, but after the love is strong enough to hold you to her. So, I was held by this theory, in spite of all the difficulties, by my youthful enthusiasm.”*

What of Feynman’s legacy today? In 1981, at a conference at MIT, Feynman gave a lecture in which he asked the question *“Can physics be simulated by a universal computer?”* He then answered his question with the statement:

*“I’m not happy with all the analyses that go with just classical theory, because Nature isn’t classical, dammit, and if you want to make a simulation of Nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem.” *

Feynman then put forward an example of a quantum computer and now, over 35 years later, physicists and engineers all around the world are seriously trying to build and operate such a computer.

Finally, Feynman was always passionate about the need for what he called ‘utter scientific integrity’. In a commencement address to Caltech students in 1974 he said:

*“Learning how not to fool ourselves is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.” *

In his fine biography of Feynman, James Gleick memorably summed up Feynman’s philosophy towards science with the words:

*“He believed in the primacy of doubt, not as a blemish upon our ability to know but as the essence of knowing.”*


Tony Hey

Kirkland, Washington

11^{th} May 2018

Richard Feynman wrote the Prologue and Epilogue to Tony’s book ‘The New Quantum Universe’, Second Edition; read these chapters for free on Cambridge Core.
