The first image from the Event Horizon Telescope, centered on the nucleus of the giant elliptical galaxy M87, does not show the shadow of the black hole’s event horizon per se. What it does show is a region somewhat larger than the horizon, where spacetime is so distorted that photons can go into orbit around the black hole without either plunging inward or flying outward. For a non-rotating black hole, this radius is 1.5 times that of the event horizon; it is somewhat smaller if the black hole is rotating. From the perspective of a distant observer, however, this photon orbit appears magnified by nearly a factor of 2 through the effect of gravitational lensing. The observed size of the shadow is consistent with the predictions of General Relativity if the black hole has a mass of about 6 billion suns, as had been deduced previously from a variety of other observations.
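For orientation, the non-rotating (Schwarzschild) numbers quoted above follow from standard general-relativistic results (a textbook sketch, not the detailed EHT analysis):

```latex
% Photon orbit radius vs. Schwarzschild (horizon) radius r_s:
r_{\mathrm{ph}} = \frac{3GM}{c^{2}} = \tfrac{3}{2}\, r_{s},
\qquad r_{s} = \frac{2GM}{c^{2}} .
% Gravitational lensing magnifies this to the apparent shadow radius
b_{\mathrm{sh}} = \sqrt{27}\,\frac{GM}{c^{2}}
  = \sqrt{3}\, r_{\mathrm{ph}} \approx 1.7\, r_{\mathrm{ph}} \approx 2.6\, r_{s} .
```

The lensing magnification of the photon orbit is thus $\sqrt{3} \approx 1.73$, the “nearly a factor of 2” of the text.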

To see a shadow, one must first have light, and in the case of M87 this light (in the form of millimeter-wavelength radio waves) is produced in a hot torus of gas circling the black hole and spiraling in. The image appears lopsided because rotation boosts the radiation on the side approaching us, through the relativistic version of the Doppler effect. The energy source for this light is the gravitational energy released by this gas before plunging through the horizon. For the shadow to be visible, this gas has to be dense enough to produce sufficient illumination from behind and around the sides of the black hole, without being so dense that it obscures the shadow from the front. Just such a fortuitous combination of circumstances was predicted theoretically by efforts to understand the powerful jets of gas that emerge from the nucleus of M87 and travel across more than a hundred thousand light-years, boosted by the spin of the black hole.
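The lopsidedness can be quantified by the standard relativistic Doppler (beaming) factor; as a rough guide (not the detailed EHT emission model), for gas moving with speed $\beta c$ at angle $\theta$ to the line of sight,

```latex
\delta = \frac{1}{\gamma\,(1-\beta\cos\theta)},
\qquad \gamma = \frac{1}{\sqrt{1-\beta^{2}}},
```

and the observed specific intensity scales roughly as $\delta^{3}$, so the side of the torus approaching us appears brightened while the receding side is dimmed.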

The current configuration of the Event Horizon Telescope has neither the sensitivity nor the angular resolution to study the gaseous torus and its jet launching pad in detail, but this should improve in the future with the addition of more antennae (i.e., more collecting area), higher observing frequencies, and possibly even an antenna or two orbiting in space. This lumpy but still relatively smooth ring should then begin to reveal what a turbulent and violent place it surely is. What we can see so far tells us relatively little new about the behavior of matter this close to a black hole. But this first image does provide dramatic confirmation of some of the most astonishing predictions of the General Theory of Relativity.

There are two very significant consequences of this. The first is a levelling of the playing field, so that (what start out as) smaller businesses can now compete disruptively with larger rivals. This can be seen in the finance market, where smaller ‘Fintech’ businesses are challenging existing ‘bricks and mortar’ banks for the projected trillion-dollar mobile payments revenues; and also in the emergence of AirBnB, Uber and other platforms of the ‘sharing economy’ that have disrupted the worlds of accommodation, transport, finance, and so on.

And the second consequence, which in part develops from the first, is the emergence, within the past 20–30 years, of tech titans such as Facebook, Amazon, Apple, Netflix and Alphabet (Google), referred to generically as the FAANGs, whose economic and social power and influence are ubiquitous across the developed world.

The rapid growth and high productivity levels within the sector have demonstrated the economic benefits of entrepreneurship. Furthermore, research has shown that Science, Technology, Engineering & Maths (STEM) entrepreneurs in particular build on innovative foundations to create sustainable businesses.

My first experience of founding a digital technology business was more than 30 years ago. At that time, the vast majority of students graduating with STEM degrees would unquestioningly expect to join one of the major technology or service businesses that dominated the sector. Interestingly – apart from notable exceptions such as IBM, BAE Systems and Siemens – few of the tech giants of that day have survived. University courses reflected this situation by focusing their curricula and teaching methods on building core scientific and technical knowledge.

Over the past few years I have been active as a mentor and investor working with a range of start-ups and university spin-outs, through the Royal Academy of Engineering’s Enterprise Hub and some excellent privately-funded accelerator programmes. The vibrancy of the UK tech start-up sector is impressive and gives cause for optimism. Yet in London – a global hub of entrepreneurial activity – more than 30% of tech start-ups are currently struggling to recruit the talent they need to achieve their potential.

Against this backdrop it is interesting to review the ways that universities have adapted to the changes in the economy and the tech landscape; how are STEM students being equipped to start their own businesses or to join exciting early-stage tech companies and address their skills shortages, and how have curricula and teaching methods changed?

Recent experience of teaching business innovation and entrepreneurship to STEM undergraduates and graduate students has provided me with three insights:

- Students studying STEM disciplines readily spark at the opportunities offered by the fast-moving world of tech business, and are keen to apply their inherent analytical skills to business innovation and design.
- A significant majority of STEM students graduate with only a rudimentary and anecdotal appreciation of how digital techniques, technologies and processes apply to business and entrepreneurship, despite a formal education which includes a wealth of relevant digital knowledge and skills. Recent UK research discovered that only 10% of engineers and 5% of scientists (1% of physicists) are exposed to entrepreneurship education. There are of course some notable institutions that have embraced the change wholeheartedly – to their significant benefit.
- While there are many excellent books that address innovation and entrepreneurship, there is a real need for a student textbook that addresses business coherently, and in a structured and practical way, for those planning to enter the modern digital world; a book that inspires, encourages and supports graduates to become job creators rather than merely to seek employment.

Research has shown that entrepreneurship can be learned and developed. That is not to say that everyone has the potential to create a $1bn business, but that, given a structured and practical introduction, the majority of STEM graduates will be able to contribute significantly to a start-up, to early-stage development, or to innovation within a larger organisation. And this means it is possible to address the skills shortage that currently frustrates both businesses and the wider economy, and also to address the related US problem of rapid technical obsolescence among STEM graduates.

In a nutshell, this was my motivation for producing *Digital Innovation and Entrepreneurship*: a book that bridges the gap between formal STEM education and the digital business world; and provides a way of introducing innovation and entrepreneurship as core components of the skillset of the modern digital professional.

Bridging this gap is important to STEM students because the digital economy now encompasses more than half of the world’s population, and succeeding in this sector increasingly demands an effective balance of knowledge and skills in both business and technology. It is also important to higher education institutions in an increasingly competitive environment in which students demand courses that better reflect the world they expect to join. Lastly, it’s important to the economy of every advanced nation that the continuing vibrancy of its tech sector is fuelled by well-equipped and motivated STEM graduates.

**Photo credits: Knight Foundation, Sandeepnewstyle**

One concrete example – one that has stood the test of time – is the Millennium Simulation project from the Virgo consortium[2], in which collisionless, cold dark matter interacted gravitationally throughout cosmic history in the then largest-ever N-body simulation. The model proved surprisingly accurate in its predictions for how galaxies form and cluster, despite being completely deprived of the universe’s most abundant known state of matter: the plasma state. The years since this breakthrough result have witnessed the inclusion of hydrodynamical interactions (incorporating baryonic matter, influenced gravitationally by the elusive dark matter) and, in the most recent efforts, magnetohydrodynamics.

Contrary to popular belief, magnetohydrodynamics (MHD) is not merely a matter of adding a Lorentz force to the momentum equation, or of passively advecting pre-existing magnetic fields. Instead, MHD does justice to that part of our universe which dominates our understanding: the intricate multi-scale and long-range behavior of plasma physics. This is classical physics revisited, joining everything we care about at scales intermediate between the quantum realm and the regime in which we rely on ‘dark’ components to make the models work. In between these two extremes, an enormous range of scales – from the size of atoms to those of stars and galaxies – benefits from the powerful scale-invariant MHD description that covers precisely this gap. This scale-invariance implies that MHD also governs plasma physics at human scales, which we will harness in future power stations running on controlled nuclear fusion (pursued in large-scale international efforts like ITER[3]). This link between laboratory and astrophysical research is a major advantage, as theoretical predictions for waves and instabilities touch base with physical reality through measurement.
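For reference, the ideal MHD equations that couple the flow and the magnetic field read, in a standard form (quoted for orientation; production simulation codes add further physics such as resistivity and energy transport):

```latex
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!(\rho \mathbf{v}) = 0,
\qquad
\rho\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\!\cdot\!\nabla\mathbf{v}\right)
  = -\nabla p + \frac{(\nabla\times\mathbf{B})\times\mathbf{B}}{\mu_{0}},
```
```latex
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}),
\qquad \nabla\!\cdot\!\mathbf{B} = 0 .
```

The induction equation is what distinguishes MHD from “hydrodynamics plus a force”: the field is dynamically stretched, twisted and amplified by the very flow it reacts back upon.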

The MHD model turns out to be gifted with mathematical beauty akin to quantum theory: its spectrum of eigenfrequencies and corresponding natural vibrations is highly organized, with Nobel-prize-winning Alfvén waves at its very center. These eigenoscillations can be witnessed in operational fusion experiments, and they agree with theory, which must account for the donut-shaped magnetic cage created for the plasma. Magneto-seismology will undoubtedly become the natural descendant of seismological studies of stellar structure, solar and stellar coronae, or accretion disk dynamics. Since the solar interior is already known to per-mille accuracy – thanks to seismological inversions that ignore magnetic fields – the time is ripe to investigate how the combined ingredients of shear flows, rotation and magnetization modify the plasma eigenstates. In accretion disks around forming stars or compact objects, central to our current focus on exoplanet systems or on obtaining the first images of black hole shadows, modern astrophysics has already established that magnetic fields are vital to drive accretion and to launch collimated, jetted outflows.
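The Alfvén waves at the center of that spectrum travel along the magnetic field at the Alfvén speed (a standard result, quoted for orientation):

```latex
v_{A} = \frac{B}{\sqrt{\mu_{0}\rho}},
\qquad
\omega_{A} = k_{\parallel}\, v_{A},
```

where $k_{\parallel}$ is the wavenumber along the field. In the donut-shaped cage of a tokamak the periodicity of the field lines discretizes these frequencies, which is part of what makes the eigenoscillations accessible to measurement.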

MHD theory has also been the main backbone for solar physics, studies of turbulence and magnetic reconnection, as well as magnetospheric and heliospheric investigations. MHD explains how our Earth’s magnetic dipole operates and protects us from the supersonic solar wind by generating our magnetosphere. Our current society, relying heavily on GPS and telecommunication, is vividly aware of the havoc that can result from a powerful solar flare, and is rightfully investing in MHD-based predictive efforts for space weather alerts. Plasma physics and MHD modeling are at the forefront of High-Performance Computing efforts, and have already demonstrated that one can model Sun-to-Earth solar coronal mass ejections (see e.g. the cover image of Magnetohydrodynamics of Laboratory and Astrophysical Plasmas[1]) faster than real time. Similar breakthroughs have been realized in modeling solar prominences, which condense through radiative losses in the million-degree corona. More energetic processes, incorporating Einstein’s theory of special relativity, require accounting for the full, unmodified set of Maxwell equations, and relativistic MHD has successfully reproduced multi-wavelength views of past cosmic explosions, such as the Crab pulsar wind nebula.

The most recent excursions of MHD theory and simulation are at the heart of the latest efforts to understand the new multi-messenger view of our universe, where gravitational waves have opened up a new observing window next to traditional electromagnetic signals. General relativistic magnetohydrodynamics – GRMHD for short – is actively used to model the most powerful explosions in our universe (gamma-ray bursts) resulting from merging neutron stars. The combined equations of general relativity and a (multi-)fluid-based plasma description contain all the signals we can capture through our telescopes or gravitational wave detectors, and hence represent a powerful toolbox for generations to come.

The last decade has seen a dramatic growth in research on applications of magnetic nanoparticles (MNPs), as evidenced by the increasing number of plenary sessions and conferences dedicated to them, such as the biennial “Magnetic Carriers” meeting. Of course, researchers interested in any of the many aspects of MNPs will benefit most from attending these meetings and hearing from the pioneers of the field in person. However, as the field of MNPs expands, attending all of these conferences becomes a daunting prospect. In addition, the increasing number of research fields discovering uses for MNPs has created a clear need for a handbook to act as an interdisciplinary lexicon.

The goal of this monograph is to provide a first point of reference for the design, synthesis and application of MNPs in biosensing and medicine, not only for newcomers, but also for established scientists looking for potentially new applications of their research. This book is written by world-leading experts and pioneers, including Urs Hafeli for targeted drug delivery, Dennis Bazylinski for research on magnetotactic bacteria and Paulo Freitas for MR-based biosensors, to name but a few.

The eight chapters in this book cover a diverse range of disciplines that together define biomedical applications of MNPs. In Chapter 1 a concise overview of the theory and application of magnetism as well as the properties of magnetic materials and nanoparticles is presented (K.R.A. Ziebeck, A. Ionescu and J. Llandro). Here, the aim is to provide a crash course on magnetism and the concepts and equations governing magnetic materials and experimental techniques. In Chapter 2 the synthesis of MNPs is described (C.J. Serna and co-workers). In this chapter a detailed practical guide is given on the best strategies for synthesizing MNPs. The relative advantages and disadvantages of each synthesis strategy are examined to enable the correct selection for the desired application.

In Chapter 3 on magnetic nanoparticle functionalization (J.J. Palfreyman) a comprehensive guide to coating and functionalizing MNPs and carriers for biomedical applications is provided. In Chapter 4 on manipulating MNPs (U.O. Hafeli, C.L. Chien and D. Fan) the application of nanoparticles in targeted medicine is reviewed. These applications include control of MNPs for the targeted delivery of therapeutics to sites of disease and magnetic hyperthermia.

Chapter 5 is on modeling the capture of MNPs from flow (N.J. Darton, B. Hallmark and D. Pearce). In this chapter a method of developing a robust model for predicting magnetic nanoparticle behavior in applied magnetic fields in the body is presented. In Chapter 6, the sensing of MNPs by diverse magnetic sensors is described by the pioneers of their respective fields, i.e. Adarsh Sandhu *et al.* for Hall effect sensors, Paulo Freitas *et al.* for MR-based sensors and Galina Kurlyandskaya for GMI sensors. This chapter details the underlying physical principles that affect the detection and imaging of MNPs in a number of biomedical sensing applications.
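To give a flavor of the kind of estimate underlying such capture models, here is our own back-of-envelope sketch (not the model of Chapter 5; all parameter values are illustrative assumptions): a nanoparticle in a viscous fluid drifts at the velocity where the magnetic force balances Stokes drag.

```python
import math

# Back-of-envelope magnetophoresis estimate (illustrative only).
# A magnetizable sphere in a field gradient feels F = V*chi*B*(dB/dx)/mu0;
# balancing this against Stokes drag 6*pi*eta*r*v gives the drift velocity.

mu0 = 4e-7 * math.pi   # vacuum permeability [T m / A]
r = 50e-9              # particle radius [m] (assumed)
chi = 1.0              # effective volume susceptibility (assumed)
eta = 1e-3             # viscosity of water [Pa s]
B = 0.5                # field magnitude [T] (assumed)
dBdx = 100.0           # field gradient [T/m] (assumed)

V = (4.0 / 3.0) * math.pi * r**3          # particle volume [m^3]
F_mag = V * chi * B * dBdx / mu0          # magnetic force [N]
v = F_mag / (6.0 * math.pi * eta * r)     # Stokes terminal velocity [m/s]

print(f"magnetic force ~ {F_mag:.2e} N, drift velocity ~ {v:.2e} m/s")
```

With these assumed numbers the drift is on the order of tens of micrometers per second, which is why realistic capture models must account carefully for competing flow velocities.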

In Chapter 7 N. Lee and T. Hyeon investigate the design of nanoparticles for contrast agents in MRI. In this chapter the optimal properties of magnetic nanoparticles for application in the medical imaging area of MRI are described. Finally, in Chapter 8, magnetotactic bacteria are reviewed (D.A. Bazylinski and D. Trubitsyn). This chapter describes the occurrence of magnetic nanoparticles in nature and how these biological systems can produce endogenous magnetic nanoparticles.

*Article courtesy of Nicholas J. Darton, Adrian Ionescu and Justin Llandro (Eds.)*

In most departments the training starts with courses on ‘Mathematical Methods for Physicists’, where students learn the basics of integration, divs and grads, urgently required in the first year curriculum. But the role of mathematics in physics transcends that of a collection of methods. At universities where this truth is reflected in the curriculum, conceptual teaching is often outsourced to departments of mathematics. After all, who would be better prepared to teach mathematical concepts than mathematicians themselves?

The above system works; otherwise it would not be implemented at a majority of academic institutions. The question is whether we can do better. We believe the answer is yes, and that the key to a modernized and more pedagogical approach to teaching mathematics in physics lies in a *stronger integration of conceptual and methodological elements in the mathematics education of physicists by physicists*.

What we have in mind is best explained with an example, the introduction of *vectors* early in the curriculum: the average beginner’s course starts from a hands-on introduction of vectors in *R^{n}*, with emphasis on concrete computations with components.

There is a better way of getting started. At the very beginning, invest two or so weeks into a systematic, bottom-up discussion of algebraic foundations — sets, groups, number fields, linear spaces. Students trained in this way ‘see’ groups and vectors everywhere: in functions, matrices, *R^{n}* and beyond.

Similar things could be said about integration theory, vector analysis, (differential) geometry, and other key disciplines of mathematics – conceptual and systematic introductions are rewarding investments which quickly pay off in fast and sustainable progress for students. Our belief in this principle is backed by experience. We have taught the reformed lecture course underlying our textbook about 10 times at two universities. Students trained in this way generally showed higher levels of confidence and proficiency in mathematics than those who went through the standard system. Remarkably, average and weak students are among those who benefit most. For them, it becomes easier to understand connections otherwise seen only by the best of the class. It should also be stressed that emphasis on mathematical concepts does not imply more abstraction. Yes, it does lead to more ‘hygiene’ in notation and to a language appearing ‘more mathematical’ than what is standard in physics courses. However, these elements are anchored in intuitive explanations, and hence are not perceived as abstract. They support students’ understanding, including that of concurrent courses in pure mathematics.
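The claim that students come to ‘see’ vectors everywhere can be made concrete in a few lines of code (our own sketch, not from the textbook): the same vector-space axioms hold verbatim for tuples in R^n, for matrices, and for sampled functions.

```python
import numpy as np

def check_axioms(u, v, w, a=2.0, b=-3.0):
    """Spot-check a few vector-space axioms for concrete elements u, v, w."""
    ok = True
    ok &= np.allclose(u + v, v + u)              # commutativity of addition
    ok &= np.allclose((u + v) + w, u + (v + w))  # associativity of addition
    ok &= np.allclose(a * (u + v), a * u + a * v)  # distributivity over vectors
    ok &= np.allclose((a + b) * u, a * u + b * u)  # distributivity over scalars
    return bool(ok)

x = np.linspace(0.0, 1.0, 101)

# Three very different 'vectors': tuples, matrices, and sampled functions.
examples = {
    "R^3 tuples": (np.array([1.0, 2.0, 3.0]),
                   np.array([4.0, 5.0, 6.0]),
                   np.array([7.0, 8.0, 9.0])),
    "2x2 matrices": (np.eye(2), np.ones((2, 2)),
                     np.array([[0.0, 1.0], [2.0, 3.0]])),
    "sampled functions": (np.sin(2 * np.pi * x),
                          np.cos(2 * np.pi * x),
                          x**2),
}

for name, (u, v, w) in examples.items():
    print(name, check_axioms(u, v, w))
```

The axioms make no reference to ‘arrows’; recognizing that is exactly the conceptual step the bottom-up introduction aims at.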

Encouraged by our uniformly positive experience we suggest a teaching reform at large, not just at our own universities. This was the principal motivation for the substantial work we put into converting our course into a textbook. It is meant to provide a template for what we hope may become a more rewarding introduction to the mathematics needed in contemporary physics.

Students engaging with thermodynamics have the opportunity to discover a broad range of phenomena. However, they are faced with a challenge. Unlike Newtonian mechanics, where forces are the cause of acceleration, the mathematical formalism of thermodynamics does not present an explicit link between cause and effect.

Nowadays, it is customary to introduce temperature by referring to molecular agitation, and entropy by invoking Boltzmann’s formula. In this book, however, the intrusion of notions from statistical physics is deliberately avoided. It is important to start off by teaching students the meaning of a physical theory and to show them clearly the extensive preliminary conceptual work that establishes the notions and presuppositions of that theory. Occasional references to notions of statistical physics, which are not formally presented, give the impression that in science results from another theoretical body of knowledge can be borrowed without precaution. As a result, students might not perceive thermodynamics as a genuine scientific approach. It is true that introducing entropy with a mathematical formula is somewhat reassuring. However, it is by performing calculations of entropy changes in simple thermal processes that students become familiar with this notion, not by contemplating a formula that is never used within the framework of thermodynamics.

This book is broken up into four parts. The first part of the book gathers the formal tools of thermodynamics, such as the thermodynamic potentials and Maxwell relations. The second part illustrates the thermodynamic approach with a few examples, such as phase transitions, heat engines and chemical reactions. The third part deals with continuous media, including a chapter that is devoted to interactions between electromagnetic fields and matter. A formal development of the thermodynamics of continuous media results in the description of numerous transport laws, such as the Fourier, Fick or Ohm laws and the Soret, Dufour or Seebeck effects.
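In their simplest linear forms (quoted here for orientation, not in the book’s notation), the transport laws mentioned above all relate a flux to a gradient:

```latex
\mathbf{j}_{q} = -\kappa\,\nabla T \quad \text{(Fourier)},
\qquad
\mathbf{j}_{n} = -D\,\nabla n \quad \text{(Fick)},
\qquad
\mathbf{j} = \sigma\,\mathbf{E} \quad \text{(Ohm)} .
```

The cross effects (Soret, Dufour, Seebeck) arise when the matrix linking fluxes to thermodynamic forces acquires off-diagonal terms, coupling, for instance, a heat flux to a concentration gradient.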

At the end of each chapter there are worked examples that put into practice what has been presented, and these are followed by several exercises. In the last part of the book, these exercises are presented with their solutions. Some exercises are inspired by physics auditorium demonstrations, some by research; examples include the melting point of nanoparticles, an osmotic power plant, a Kelvin probe, the so-called ZT coefficient of thermoelectric materials, thermogalvanic cells, ultramicroelectrodes and heat exchangers.

Thanks to the theory of irreversible phenomena which was elaborated in the period from approximately 1935 to 1965, thermodynamics has become an intelligible theory in which Newtonian mechanics and transport phenomena are presented in a unified approach. *Principles of Thermodynamics* demonstrates that thermodynamics is applicable to many fields of science and engineering in today’s modern world.

**Principles of Thermodynamics** by Jean-Philippe Ansermet and Sylvain D. Brechet is published by Cambridge University Press, January 2019. Purchase your copy in hardback or view online with institutional access from Cambridge Core.

Hence, cosmological and astronomical observations strongly suggest that at large scales the force of gravity may not behave according to Einstein’s standard general relativity, and that a generalization of the Hilbert-Einstein action principle, either at the geometric level, or at the matter level, may be required for a full understanding of the gravitational interaction. One of the first such generalizations of the action principle for gravity goes back to 1970 in the form of the $f(R)$ gravity theory, in which the simple Ricci scalar of Einstein’s gravity is replaced with an arbitrary function of the curvature.
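Schematically (a standard form, quoted for orientation), the $f(R)$ generalization replaces the Ricci scalar $R$ in the Hilbert-Einstein action by an arbitrary function of it:

```latex
S = \frac{1}{2\kappa}\int f(R)\,\sqrt{-g}\;\mathrm{d}^{4}x + S_{m},
\qquad
\kappa = \frac{8\pi G}{c^{4}},
```

where $S_{m}$ is the matter action; Einstein’s theory is recovered in the special case $f(R) = R$.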

The book “Extensions of f(R) Gravity: Curvature-Matter Couplings and Hybrid Metric-Palatini Theory” by T. Harko and F. S. N. Lobo focuses on some recent extensions of the standard theoretical concepts in gravity and cosmology. In particular, we examine in detail some recent modifications of gravity that reevaluate the role of matter in gravitation. In Einstein’s general relativity, matter is a passive source of curvature, and therefore of gravity. But what if matter not only creates, but also dynamically couples to, geometry? Then new terms would appear in the Hilbert-Einstein action, and their presence would significantly extend the range of theoretical models, as well as the possibilities for explaining the two most mysterious components of the Universe: dark energy and dark matter, respectively. There are several ways in which matter may couple to geometry. In our book we give detailed descriptions of three such theories, involving the coupling between the matter Lagrangian and curvature (the $f\left(R,L_m\right)$ gravity), between the trace of the energy-momentum tensor and curvature (the $f(R,T)$ theory), and a combination of the metric and Palatini $f(R)$ formalisms called the hybrid metric-Palatini gravity theory.

These theoretical approaches open some new perspectives on the problems of the dark energy and dark matter. They can explain the late acceleration of the Universe, the galactic rotation curves or the mass deficit in galaxy clusters without resorting to the enigmatic (and undetected) dark energy and dark matter. The dynamics and structure of the Universe can then be understood only in terms of the complex interplay between ordinary matter and geometry, without any need of introducing exotic forms of matter into the fabric of the Universe.

In our book we have presented in detail the implications of gravity theories with matter-geometry coupling. In particular, we have shown how dark energy and dark matter can be explained by simple modifications of the gravitational action. But is this the final word on gravity? We hope that our book will serve as a reference for initiating and continuing state of the art research in the fundamental fields of modified gravity, dark energy and dark matter.

Born in Prague in 1922, Emil’s early life was transformed by World War II. Fleeing ahead of the advancing Germans, he first ended up in Paris, where he worked as a bicycle courier for the Czech government in exile, and then retreated further to London. There he pursued his higher education, and earned his PhD in mathematics from Bristol University in 1948. In 1951, Emil was presented with the opportunity of a lifetime: the famous quantum physicist Max Born was looking for someone to help him write a new English version of his famed 1933 optics book *Optik*. Emil was recommended for the job, and he joined Born’s group to work on the project.

The book project would continue for a number of years, up until 1959, when the first edition of *Principles of Optics* was published. The book became known over the years as “The Optics Bible,” as it contains incredible detail on almost all aspects of physical optics. It contained one of the first descriptions of holography in a book, which made Dennis Gabor, the inventor of holography, very happy.

The book also contained the first detailed description of what is known as optical coherence theory, the merging of optics and statistics. His contributions to coherence theory are what Emil is most known for, and he is often referred to as the “Father of Coherence Theory.” All light sources possess some degree of randomness, and the light that they emit possesses random fluctuations. Lasers possess relatively small fluctuations, while light bulbs have lots of fluctuations. We do not see these fluctuations, because they happen much too fast; our eyes, and most detectors, only see the average properties of a light wave. These average properties can be described using statistical methods.

Before the work of Emil Wolf, it could be said that most researchers viewed the fluctuations of light as uninteresting “noise.” In 1954, however, Emil discovered and published a simple pair of optics equations, now known as the Wolf equations, which demonstrate that the statistical properties of a light wave also propagate as a wave. This was a demonstration that the fluctuations of light were not just noise to be accounted for, but a physical phenomenon worth studying in their own right.
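In standard notation (quoted here for orientation), the Wolf equations concern the mutual coherence function of the field $E$:

```latex
% Mutual coherence function:
\Gamma(\mathbf{r}_{1},\mathbf{r}_{2},\tau)
  = \left\langle E^{*}(\mathbf{r}_{1},t)\,E(\mathbf{r}_{2},t+\tau)\right\rangle ,
% which itself satisfies a pair of wave equations:
\nabla_{1}^{2}\Gamma = \frac{1}{c^{2}}\frac{\partial^{2}\Gamma}{\partial \tau^{2}},
\qquad
\nabla_{2}^{2}\Gamma = \frac{1}{c^{2}}\frac{\partial^{2}\Gamma}{\partial \tau^{2}} .
```

In words: the correlations of the field propagate exactly as the field itself does.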

The chapter on coherence theory was the last part of Principles of Optics to be written, and it was delaying publication; Max Born impatiently told Emil to leave it out of the book and send it off! Emil did not, of course, and this was the right decision: in 1960, the first laser was built, and coherence theory turned out to be crucial to understanding the nature of the light emitted by the novel light sources. In the end, Born was delighted that coherence was included in the book, and the chapter helped propel it into being the most important optics text for decades. In 1999, the 7th (expanded) edition of Principles of Optics was published by Cambridge University Press.

I became Emil’s student not long after he had made another fundamental discovery in coherence theory: the phenomenon of correlation-induced spectral changes, in which he demonstrated that the coherence properties of light can influence its spectral properties. I was struggling to decide what to do with myself in my PhD at the time, and my classmate Scott Carney (now the Director of the Institute of Optics in Rochester) suggested I talk to Emil. I went to see him, and I recall that the first thing he said to me was something like, “Well, you should keep in mind in working with me that I’m getting old and I could die at any time. However, my doctor says I’m in good health right now, so…” He then gave me a huge pile of his research papers, all from the last five years, which convinced me that he would be fine.

And he was! Emil continued to supervise students and do research for many years. His last paper, “Creating von Laue patterns in crystal scattering with partially coherent sources,” was published in 2016.

Emil had a long relationship with Cambridge University Press. In 1995, he coauthored *Optical Coherence and Quantum Optics* with his longtime friend and collaborator Leonard Mandel, which became an instant classic of the field. In 1999, the aforementioned 7th edition of *Principles of Optics* was released, and I helped Emil work on the index, which gave me an opportunity to enjoy his wife Marlies’ cooking, desserts, and company. In 2007, he finished *Introduction to the Theory of Coherence and Polarization of Light*, which serves as a great introduction to the often difficult-to-master subject.

I will always remember Emil as not only a scientist, but a friend and even a family member. He treated all his students like family, and I treasured those lunches, dinners, drinks and meetings where we would talk about science, society, and history. Emil’s life was filled with fascinating experiences, and he would share humorous stories at any opportunity. I always loved hearing these stories, even though – by his own admission – he would often repeat them. They never grew old. Those of us who worked with him would also often argue with him – even vehemently – about scientific problems. But Emil would always stress that these arguments were in good fun and that we remained friends, during and afterwards.

Emil Wolf was a great scientist, role model, and friend. He taught me how to be a scientist, and I owe any success that I have had to his careful mentoring. He has left behind world-changing research and a positive influence on all those who interacted with him.

He will be missed.

The basic laws of physics constrain options for addressing real-world problems such as efficient energy storage, climate change, and the treatment of nuclear waste. An engineer, economist, manager, or politician with a personal understanding of the science behind energy issues has a significant advantage in the marketplace of ideas. Thus, when MIT established its energy curriculum, a core objective was to undergird the breadth and complexity of energy studies with a thorough foundation in the science behind the issues. This is why MIT’s energy studies curriculum begins with a course on the fundamental science of energy sources, uses, and systems. Implementing this goal was the challenge that led us to create our course on *The Physics of Energy*. Initially we had no intention of writing a book, but to our surprise, no existing textbook met our needs. Our students required a text that respected their knowledge of basic college math and physics, that covered the landscape of energy sources and uses broadly and at a consistent level, and that focused on the science, uncomplicated by excursions into economics, regulation, and politics. A decade of teaching and writing lecture notes, with input from generations of students, led finally to our newly published book, *The Physics of Energy*.

Our book covers the basic as well as applied science of energy. It presents a scientific foundation for engineering applications and a basis for informed discussions of economic, political, and regulatory issues. On subjects ranging from the limits on efficiency of solar cells to the flow of energy through climate systems, and on technologies ranging from wind turbines to air conditioners, you will find clear and concise presentations of the basic science and technology.

Are you studying to be an engineer or a designer? The devices and systems you create must acquire, store, transform, and utilize energy efficiently. Much of the engineering effort in producing smart phones, automobiles, refrigerators, and other products of the modern age goes into improving the way in which these devices process energy. The understanding of energy processes you will find in *The Physics of Energy* can provide a better appreciation of trade-offs or suggest new ideas for improved technologies.

Are you, like many of the students in our MIT class, working in the social sciences or a student of management or business? Competition for and exploitation of energy resources and technologies have given rise to some of the most intractable problems in the contemporary world. Externalities such as greenhouse gas emissions or radiation associated with energy production and conversion pose vexing problems for societies. From fossil fuels such as coal, oil, and natural gas to renewables such as solar, wind, and water power, energy resources and conversion systems are central considerations in social, economic, and political decision making. They will only become more important as we struggle with the inexorable effects of climate change that result from human energy usage. *The Physics of Energy* is designed to be the go-to reference for a scientific perspective on all of these subjects.

Are you studying to be a scientist, interested in the application of science to real-world systems? Energy emerges as a unifying concept that underlies the structure and evolution of most physical systems. It governs the dynamics of everything from the quantum world of photovoltaics to biological systems, which have evolved in large part to optimize the gathering and utilization of energy from the environment.

Whatever your area of study, as you work on your next substantial research or writing project, you should ask questions like:

**‘What is the role of energy in the system?’**

**‘Are there other energy sources or other ways of processing energy that may be relevant here?’**

**‘Are there limits to how efficient the energy processes involved in these systems can be?’**

**‘How do energy choices in this domain affect economic and political spheres?’**

By asking these questions and forming a clear understanding of the role energy processes, limits and efficiencies play in whatever systems you are writing about, you can help frame the conceptual structure and logic of your presentation. Incorporating clear scientific understanding into your arguments will strengthen the effectiveness and clarity of your written work. While today’s energy systems can seem bewilderingly complex, understanding how these systems work begins with a clear foundation in the basic science of energy. Whatever systems or domain you are interested in, it is likely that energy plays a significant role. *The Physics of Energy* is the resource that lays out the basic principles to inform your work and provides a springboard to further research and study of the role of energy in many of the most fascinating and crucial questions of modern times.

*The Physics of Energy*, by Washington Taylor and Robert L. Jaffe, is now available from Cambridge University Press. Read a free chapter and buy a copy on Cambridge Core.

The famous American physicist Richard Feynman was born one hundred years ago, on 11th May 1918, and it is worthwhile spending a few moments reflecting on what makes his achievements so enduring. To the general public, Feynman first became widely known with the publication in 1985 of a best-selling collection of stories from his life in physics called *‘Surely You’re Joking, Mr Feynman’*. The title refers to an incident during his introduction to graduate school at Princeton, at an event called the ‘Dean’s Tea’. This was unfamiliar territory for Feynman, who had grown up in Far Rockaway, a neighborhood in the New York City borough of Queens, and who had gone to MIT for his undergraduate degree. But it was Feynman’s participation in the presidential commission that investigated the Space Shuttle Challenger disaster in 1986 that made him one of the best-known physicists in the world. At a public meeting of the Commission, Feynman famously demonstrated the cause of the shuttle disaster using a rubber O-ring, a clamp and a glass of ice water. It is still worthwhile looking at the video of the event on YouTube.

To physicists, Feynman is revered for many reasons but is probably best known for the ‘Feynman diagram’ approach to calculations of quantum field theory. Feynman’s approach to field theory calculations was pictorial and in marked contrast to the more formal mathematical approach of his fellow Nobel Prize Winner, Harvard professor Julian Schwinger. As Schwinger later said:

*“Like the silicon chips of more recent years, the Feynman diagram was bringing computation to the masses.”*


Feynman diagrams are now an integral part of theoretical physics. Ironically, Freeman Dyson, the person who proved that Feynman’s intuitive approach was actually the same as Schwinger’s more mathematical approach, never won the Nobel Prize although he was instrumental in getting Feynman’s space-time approach accepted by people like J. Robert Oppenheimer and the rest of the physics elite.

When I was an undergraduate student at Oxford, I first came across Feynman through his famous ‘Red Books’ – the three-volume set of his ‘Lectures on Physics’. Feynman dedicated two years of his life to creating a two-year introductory course in physics for Caltech students that covered most of modern physics – mechanics, kinetic theory, electromagnetism and quantum mechanics. Although many of the students reportedly found the lectures hard going despite Feynman’s inimitable style of lecturing, his Lectures on Physics have become a staple in the education of physicists around the world.

After completing a D.Phil (the Oxford equivalent of a Ph.D.) in theoretical physics in 1970, I was excited to be awarded a Harkness Fellowship to go to Caltech for two years as a post doc. Just before I left Oxford, Feynman had published a paper on ‘partons’ – an intuitively appealing picture of the proton as made up of point-like constituents. There was also great interest in the new experimental results from SLAC, the Stanford Linear Accelerator Center, on ‘deep inelastic scattering’ of electrons from protons. Feynman had originally applied his parton ideas only to proton-proton scattering, but on a visit to SLAC he had recently given a seminar showing how the new deep inelastic scattering results could be understood using his parton model of the proton.

I arrived at Caltech in 1970 feeling both trepidation and excitement, and it was like moving from the slow lane to the fast lane on the freeway. At Oxford we had sort of absorbed the idea that the physics world revolved a little around Oxford, but at Caltech, it was clear that, to a first approximation, the UK, Europe and the rest of the world were largely irrelevant. This was the ethos of the theory group at Caltech with its two Nobel Prize winners, Richard Feynman and Murray Gell-Mann. In actual fact, my old professor in Oxford, Dick Dalitz, was one of the few physicists who had taken seriously the proposals by Gell-Mann, and independently, by George Zweig, then a professor at Caltech, for quarks as fundamental constituents of matter. Dalitz had developed a detailed quark model for baryons and mesons and showed that this had remarkable power to reproduce many features of the hadron spectrum found by experiment. Despite its clear theoretical inconsistencies, Dalitz regarded his explicit quark model as similarly useful as Bohr’s equally inconsistent model of the atom. Just like Bohr’s model, Dalitz was convinced that the quark model pointed the way to some deep truths about Nature.

Feynman was never one to take other people’s calculations on trust and so he had developed his own version of the quark model with graduate student, Finn Ravndal, and post doc, Mark Kislinger. Perhaps because of his work with them, Feynman often used to have lunch with the graduate students and post docs at the Caltech campus cafeteria, universally known as ‘The Greasy’. It was here that I first heard versions of Feynman’s stories that he and fellow bongo drummer, Ralph Leighton, later wrote up for publication. The intellectual rivalry between Gell-Mann and Feynman was legendary and Gell-Mann frequently grumbled about what he regarded as Feynman’s ‘myth making’.

My most intimidating moment at Caltech was at an informal lunch-time lecture I had agreed to give to the experimental particle physicists. The group was led by new Nobel Prize winner, Barry Barish, with Frank Sciulli and they had just been awarded funding for an important experiment on deep inelastic neutrino scattering. Feynman’s parton explanation of deep inelastic electron scattering had been written up – with due acknowledgement to Feynman – by ‘BJ’ Bjorken and Manny Paschos who had both attended his lecture at SLAC. All I was going to do in my lecture was to explain how the parton model could be applied to neutrino scattering. However, you can imagine my surprise when I arrived to give my talk to see Feynman sitting in the audience. In fact, all went well until I was nearing the end of the lecture when Feynman jumped up and said:

*“Stop. Draw a line. Everything above the line is the parton model – below the line are just some guesses of Bjorken and Paschos.”*

As I rapidly became aware, the reason for Feynman’s sensitivity on this point was that Murray Gell-Mann was going around the Lauritsen building at Caltech growling things like *“Anyone who wants to know what the parton model predicts needs to consult Feynman’s entrails.”* The point that Feynman was making was that all the results above the line in my seminar were identical to predictions that Murray had derived using fancier algebraic techniques. Feynman just wanted to dissociate his parton model predictions from some of the wilder parton model predictions of others. My lecture was just an opportunity for him to do that.

What made a Feynman lecture unique? The well-known Cornell physicist David Mermin once said, *“I would drop everything to hear him give a lecture on the municipal drainage system.”* Why was this? An LA Times editor captured the essence of a Feynman lecture with the words:

*“A lecture by Dr. Feynman is a rare treat indeed. For humor and drama, suspense and interest it often rivals Broadway stage plays. And above all, it crackles with clarity. If physics is the underlying ‘melody’ of science, then Dr. Feynman is its most lucid troubadour.”*


The article went on to say:

*“No matter how difficult the subject – from gravity through quantum mechanics to relativity – the words are sharp and clear. No stuffed shirt phrases, no ‘snow jobs’, no obfuscation.”*

In his Nobel Prize lecture, instead of giving a talk about the beautiful Feynman diagram framework he had created, Feynman chose to show some of his missteps along the way to his eventual success:

*“That was the beginning and the idea seemed so obvious to me and so elegant that I fell deeply in love with it. And, like falling in love with a woman, it is only possible if you do not know too much about her, so you cannot see her faults. The faults will become apparent later, but after the love is strong enough to hold you to her. So, I was held by this theory, in spite of all the difficulties, by my youthful enthusiasm.”*

What of Feynman’s legacy today? In 1981, at a conference at MIT, Feynman gave a lecture in which he asked the question *“Can physics be simulated by a universal computer?”* He then answered his question with the statement:

*“I’m not happy with all the analyses that go with just classical theory, because Nature isn’t classical, dammit, and if you want to make a simulation of Nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem.”*

Feynman then put forward an example of a quantum computer and now, over 35 years later, physicists and engineers all around the world are seriously trying to build and operate such a computer.

Finally, Feynman was always passionate about the need for what he called ‘utter scientific integrity’. In a commencement address to Caltech students in 1974 he said:

*“Learning how not to fool ourselves is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.”*

In his fine biography of Feynman, James Gleick memorably summed up Feynman’s philosophy towards science with the words:

*“He believed in the primacy of doubt, not as a blemish upon our ability to know but as the essence of knowing.”*


Tony Hey

Kirkland, Washington

11th May 2018

Richard Feynman wrote the Prologue and Epilogue to Tony’s book *‘The New Quantum Universe’*, Second Edition. Read these chapters for free on Cambridge Core.
