The post Andromeda Galaxy at 100 first appeared on Fifteen Eighty Four | Cambridge University Press.

During beautiful evenings of late summer and autumn, you can observe in the constellation Andromeda what appears through binoculars as a small diffuse object. From a place free of light pollution, you can even see this strange object with the naked eye. The ancients had noted this. In his magnificent *Book of the Fixed Stars*, published in 964, the Persian astronomer Abd al-Rahman al-Sūfī (903-986) of Isfahan (Iran) identified it as the *Little Cloud*. In 1612, the German mathematician Simon Marius became the first to observe the object using a telescope; he described it as “the light of a candle shining through a horn, seen from a distance.” The Little Cloud is well known today: it is the Andromeda Galaxy.

The French astronomer Charles Messier (1730-1817) was a comet hunter. He listed the Little Cloud in his catalog as number 31 and warned fellow comet hunters not to mistake Messier 31 for a comet. At 2.5 million light years, it is the most distant object that can be seen with the naked eye.

*Hubble cuts the Gordian knot*

How did Edwin Hubble measure the distance to the Andromeda Galaxy? He did so by recognizing and observing variable giant stars of the Cepheid type, evolved stars which can be 100,000 times more luminous than the Sun. Pulsating Cepheids have a characteristic variability which makes them a reliable standard candle: they can be spotted at great distances and compared to nearby examples for which the distance has been directly established. In 1923 and 1924, using the telescopes of the Mount Wilson Observatory in California, Hubble succeeded in photographing Cepheids in a few “nebulae”; he established the distances of a few dozen of these Cepheids, which he presumed were associated with Messier 31. On November 23, 1924, the *New York Times* reported on the discovery. The official announcement took place in Washington D.C. at the annual meeting of the American Astronomical Society in late December. Hubble reported his results in two very brief articles at the beginning of 1925: he placed Messier 31 at least 930,000 light years from the Sun, therefore clearly outside the Milky Way [1].

Cepheids had been photographed in Messier 31 by a few astronomers as early as 1917, but the merit of clearly recognizing them as such in 1924 belongs to Hubble. Something can be seen several times before being discovered! Without fanfare, Hubble put an end to an age-old debate: that between the defenders of the local hypothesis, who claimed the nebulae were located within a super Milky Way, and the proponents of the extragalactic nature of the majority of nebulae.

The reason for the divergence between these two camps? For several centuries, astronomers were unable to elucidate the nature of nebulae. Were they diffuse clouds of ethereal substance or independent star systems like the Milky Way? An exact determination of their distances was a necessary step to resolve the question. We now know that a minority, like the Orion Nebula, are real gas clouds; the majority are immense independent star systems external to our Milky Way.

*Some precursors*

As early as 1917, the American astronomers George Ritchey (1864-1945) and Heber Curtis (1872-1942) had discovered nova-type stars in a few “nebulae”, including Messier 31. A nova is produced by the nuclear detonation of material ejected from a red giant star falling onto its companion white dwarf. It appears at its brightest during the explosive phase, then fades within weeks. The pattern of their variability, however, was too irregular and unpredictable to make novae good standard candles. Still, Ritchey and Curtis could argue that if the novae in these nebulae appeared thousands of times fainter than the novae in the Milky Way, they had to lie well beyond the Milky Way.

Hubble was at first looking for novae. His surprise was to observe a recurring variable star in Andromeda, one which did not disappear after a few weeks; he had first noted it as a nova. In early 1924, he determined that it obeyed the cycle of brightness variation characteristic of Cepheids. The astronomer Henrietta Leavitt (1868-1921), working at the Harvard College Observatory, had established in 1912 that there was a close relationship between the luminosity and the period of variability of Cepheids in the Magellanic Clouds: the more luminous the star, the longer its period, with periods ranging from a few days to about 50. It was therefore a question of establishing the period to derive the luminosity, and from there, the distance. Like novae, Cepheids observed in nebulae appear fainter than those of the Milky Way, because they are much more distant. Having measured their periods, Hubble derived their luminosities, and by the inverse-square law of brightness, he deduced the distance to Messier 31.
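The logic of the method can be sketched in a few lines of code. This is an illustrative reconstruction, not Hubble's actual procedure: the calibration constants `a` and `b` of the period-luminosity relation and the magnitudes used in the example are assumed placeholder values.

```python
import math

def cepheid_distance_ly(period_days, apparent_mag, a=-2.43, b=-4.05):
    """Distance to a Cepheid from its period and apparent brightness.

    Assumes an illustrative period-luminosity calibration
    M = a*(log10(P) - 1) + b for the absolute magnitude M; real
    calibrations depend on passband and other details.
    """
    M = a * (math.log10(period_days) - 1.0) + b  # luminosity from the period
    mu = apparent_mag - M                        # distance modulus m - M
    d_parsec = 10 ** (mu / 5.0 + 1.0)            # m - M = 5 log10(d / 10 pc)
    return d_parsec * 3.2616                     # parsecs -> light years

# A hypothetical 30-day Cepheid observed at apparent magnitude 18.5:
print(round(cepheid_distance_ly(30.0, 18.5)))
```

The inverse-square law enters through the distance modulus: the fainter a star of known luminosity appears, the farther away it must be.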

*Expansion of the universe in two stages*

In 1952, the German astronomer Walter Baade (1893-1960) identified two types of Cepheids with distinct luminosities. Hubble had observed the more luminous type but had calibrated his distances against the less luminous one. Baade’s discovery resulted in a roughly two-fold increase in the distances initially derived by Hubble. Today the distance to the Andromeda Galaxy is established by a set of indicators at 2.5 million light years.

The mid 1920s witnessed an avalanche of discoveries and proposals which completely transformed our vision of the universe. In 1924, also 100 years ago, the young Russian physicist Alexander Friedmann (1888-1925) re-examined Albert Einstein’s equations of general relativity of 1915. Unlike Einstein’s stationary solution, Friedmann demonstrated that space-time is unstable: just as you cannot make a pencil stand on its tip, space is either contracting or expanding. Unaware of his Russian colleague’s work, the Belgian cosmologist Georges Lemaître (1894-1966) found the same solutions, but he went much further than Friedmann. In 1927, on the basis of preliminary data on the distances and speeds of a few dozen galaxies, Lemaître concluded that the universe was expanding. Then, in a flash of genius, Lemaître ran the expansion in reverse. In 1931, in a short publication in *Nature*, he asserted that everything that exists was born in an extremely small and hot quantum packet a few billion years ago. This was the “primeval atom” hypothesis, which today has become the big bang theory.

Delayed in his studies by the Great War, Hubble received his PhD from the University of Chicago in 1921; his thesis subject was “Photographic Investigations of Faint Nebulae”. Working at the Mount Wilson Observatory since 1919, Hubble had access to its two large telescopes of 1.5 m and 2.5 m. Surprisingly, Hubble was not considered a good observer; his colleagues noticed, for example, that for several of his photos the focus was not optimal. He was nevertheless methodical, and proud, though reticent about crediting the contributions of others. He is sometimes credited with the discovery of the expansion of the universe through his work with his meticulous colleague and peerless observer Milton Humason (1891-1972). Together, they established the systematic relation between recession velocities and distances for tens of galaxies. However, Hubble remained doubtful of the interpretation until his death. Alan Sandage (1926-2010), Hubble’s colleague for years, was categorical: Hubble never believed in the reality of expansion.

While on a large scale the expansion of the universe carries galaxies away like ice floes, gravitational attraction continues to dominate at distances of a few million light years or less. Galaxies can thus assemble in small groups or form clusters of several thousand galaxies. They can also fall into each other and merge. This is what will happen to the Milky Way and the Andromeda Galaxy in 4 or 5 billion years; our two spirals will collide to form a large elliptical galaxy.

In the meantime, we can affirm that 100 years ago we discovered the world of galaxies, its immensity and its strange dynamics. It is striking to remind ourselves that this discovery happened so recently; for example, my grandparents, born just a few years before Edwin Hubble, were his contemporaries!

[1] Edwin P. Hubble, “Cepheids in Spiral Nebulae”, *The Observatory*, 1925, vol. 48, p. 139-142; also in *Popular Astronomy*, 1925, vol. 33, p. 252-255.


The post Myths and Open Questions of Quantum Mechanics first appeared on Fifteen Eighty Four | Cambridge University Press.

- The Planck radiation spectrum and the photoelectric effect tell us nothing at all about whether particles like little billiard balls exist. Both experiments are well understood as wave phenomena.
- Although quantum jumps can be very fast, there is no evidence that they are ever instantaneous. Straightforward calculations give the time scale for quantum transitions.
- Some definite, physical measurements can be made that require a system to be in a superposition of different particle numbers.

The above, and other considerations discussed in the book, lead to the view of particles as “epi-phenomena,” that is, concepts derivable from a deeper underlying reality, namely the quantum field. Therefore, the common dichotomy of “wave-particle duality” is wrongheaded.

While “wave-particle duality” is not a fundamental mystery, there is still much that is mysterious. In particular, how do we account for non-local correlations, when a detection event at one location seems to influence what happens at another detector far away? In the book, I review in depth some of the major rival interpretations of quantum mechanics, namely the Copenhagen interpretation, the many-worlds (Everett-Wheeler) interpretation, and spontaneous collapse.

The Copenhagen interpretation has received much critique over the decades, which I review. The many-worlds view, however, seems to be rising in influence without the same degree of scrutiny; the main objection raised is often simply that it has bizarre implications. In this book I offer an extended discussion of the problems of the many-worlds view from a scientific perspective. Among other things, I argue that the many-worlds approach doesn’t actually help with the nonlocality problem. Instead of the results of specific outcomes of detection events having nonlocal effects, in the many-worlds approach the definition of a set of basis states by a detector (called “einselection” by Zurek) is propagated nonlocally.

Spontaneous collapse theories are often not taken seriously, but they are still viable. In the book I present a new version that is consistent with experiments, which ends up looking just like a set of weak measurements. While extended critique of this model has not yet occurred, it has been fleshed out enough to not be obviously wrong.

Overall, the book takes the perspective that the quantum field is real (as real, say, as water waves), and not just a construct. The first 120 pages of the book have no equations and can be read by a non-physicist. Several of the later chapters require only freshman/sophomore-level physics. I also include a basic introduction to quantum field theory for the non-expert, and several results of modern decoherence theory, which relates to measurement theory.


The post Quantum measurement book blog first appeared on Fifteen Eighty Four | Cambridge University Press.

Measurement is one of the most fascinating and misunderstood aspects of quantum physics. It plays no role in classical physics, other than reducing ignorance about the underlying reality. In quantum physics, measurement plays a fundamental role, and specifying what kind of measurement is performed is necessary in order to make scientifically testable predictions within the quantum formalism. The *collapse of the wavefunction* has been an intriguing and dramatic part of quantum theory for almost a century. We delve into this phenomenon as the book progresses, gradually unpacking it as a testable scientific effect.

**Why did we write this book?**

The simple fact of the matter is that our colleagues kept asking us to do so. This is because there has been a tremendous amount of progress in the field (both theory and experiment) which only a small fraction of scientists seem to know about. Together with our students and colleagues, we have been privileged to be part of this process of scientific discovery. There is a real gap in the physics book literature covering these recent advances. This situation convinced us that a new book on this topic was both timely and needed. It is our hope that a comprehensive and readable text, aimed at scientists and students who know some quantum physics, will help educate the current generation and the ones that follow about this fascinating and deep topic. Our fondest hope is that this textbook will benefit our community and help to educate and enlighten, as well as serve as a definitive reference on this subject.

**What will you learn?**

Non-experts in the field can profitably read the first and last chapters. There we discuss the history of quantum physics leading up to the current quantum information age, as well as some of the philosophical implications and controversies of the foundations of quantum physics. We give our own point of view on these profound subjects and touch on some other perspectives as well, speculating on where the field is going and what the next breakthroughs will be. For readers with some knowledge of quantum physics, we cover the selected topics in quantum measurement we believe are the most important. Topics such as weak measurements, weak values, continuous quantum measurement (in both its diffusive and jump flavors), and feedback control are covered as fundamental and advanced topics. We focus first on motivating experiments, and only after understanding the detailed phenomena do we give a mathematical description. We spend much of the book focusing on how measurements are actually done, which will be of particular interest to experimental physicists. We discuss a variety of physical realizations, how to make amplifiers, and what limits quantum physics puts on them. We then go into many of the fascinating phenomena that arise in weak and continuous measurements, such as measurement reversal, entanglement by measurement, and the joint measurement of non-commuting observables, ideas thought to be impossible until fairly recently! On the formalism side, readers will learn how to predict the probabilities of outcomes of generalized measurements, and how to assign new, post-measurement quantum states that correctly describe the statistics of subsequent measurements.
For continuous measurements, we cover different formal approaches to describe the trajectories the state undergoes during continuous monitoring, including the stochastic Schrödinger and master equations, complete with a discussion of stochastic calculus, as well as the stochastic path integral formalism.

We end this blog post with a little quiz: If you can’t answer some of these questions, it is a great reason to read our book and learn the answers!

- How long does a measurement take?
- Can a measurement be reversed?
- Is it possible to track the quantum wavefunction collapse in time?
- How can one entangle separated objects via measurement?
- When does a state jump versus diffuse?
- What is the most likely path of a quantum trajectory?
- How can one describe joint unitary and non-unitary processes as a dynamical system?
- What limits does quantum mechanics put on amplification? How does one build a quantum-limited amplifier?

Title: Quantum Measurement

Authors: Professor Andrew N. Jordan and Professor Irfan Siddiqi

ISBN: 9781009100069


The post Theory of liquids, the hard problem first appeared on Fifteen Eighty Four | Cambridge University Press.

My surprise quickly grew into astonishment, for two reasons. First, the heat capacity is one of the central properties in physics. The constant-volume heat capacity is the temperature derivative of the system’s energy, the foremost property in physics, including statistical physics. Heat capacity informs us about the system’s degrees of freedom and about the regime the system is in, classical or quantum. It is also a common indicator of phase transitions, their types and so on. Understanding the energy and heat capacity of solids and gases is a central and fundamental part of the theories of these two phases. Thermodynamic properties such as energy and heat capacity are also related to important kinetic and transport properties such as thermal conductivity. Not having this understanding for liquids, the third basic state of matter, is a glaring gap in our theories. This is especially so in view of the enormous progress in condensed matter research over the last century.

The second reason for my surprise was that the textbooks did not mention the absence of discussion of liquid heat capacity as an issue. It is harder to solve a problem if we don’t know it exists.

The available textbooks are mostly concerned with liquid structure and dynamics. They do not discuss the most basic thermodynamic properties such as liquid energy and heat capacity, or explain whether the absence of this discussion is related to a fundamental theoretical problem. Textbooks where we might expect to find this discussion but don’t include those dedicated to liquids, advanced condensed matter texts, and statistical physics textbooks. This list has a notable outlier: the Statistical Physics textbook by Landau and Lifshitz. Landau and Lifshitz discuss general thermodynamic properties of liquids and explain why they *can not* be calculated, in contrast to solids and gases. The reason is the combination of strong interactions and dynamical disorder. Strong interactions mean that gas theories are inapplicable to liquids. Dynamical disorder and the absence of fixed positions imply that the harmonic approximation used in solid state theory is seemingly inapplicable to liquids even as a rough approximation. This precludes using our well-developed theories designed for gases and solids or their modifications, including perturbation theory. Therefore, Landau and Lifshitz conclude, general results for thermodynamic properties of liquids are impossible to derive. As aptly summarised by Pitaevskii, liquids, unlike solids and gases, do not have a small parameter.

The problems listed by Landau, Lifshitz and Pitaevskii are fundamental. They explain why liquids have not been understood at a level anywhere near that of solids and gases. In view of the enormous progress of condensed matter research, it is perhaps striking to realise that we do not have a basic understanding of liquids as the third state of matter, and certainly not on par with solids and gases. This was one of the reasons I decided to look into this problem.

A set of new results has emerged in the last few decades related to collective excitations in liquids: phonons. It has taken a combination of experiment, theory and modelling to understand phonons in liquids well enough to connect them to liquid thermodynamic properties. Recall that this connection between phonons and thermodynamic properties was the basis of the Einstein and Debye approach to solids, which laid the foundations of the modern solid state theory. The upshot is that a liquid theory can still be constructed on the basis of phonons, but the key point is that, unlike in solids, the phase space available to phonons is not fixed but variable. In particular, this phase space reduces with temperature. This reduction quantitatively explains the experimental liquid data and in particular the reduction of the liquid specific heat from the solidlike to the ideal gas value with temperature. It has taken another decade to obtain an independent verification of this theory.
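A schematic version of this idea can be put in code. Assuming a Debye density of states and the simplified classical harmonic picture in which shear (transverse) modes with frequency below a Frenkel frequency ωF no longer propagate, the heat capacity per atom interpolates between the solidlike Dulong-Petit value and a lower value as the phonon phase space shrinks. This is an illustrative simplification, not the full theory discussed in the book.

```python
def liquid_cv_per_atom(omega_f_over_omega_d):
    """Schematic classical heat capacity per atom, in units of k_B.

    Illustrative assumption: with a Debye density of states, the
    fraction of shear modes lost below the Frenkel frequency is
    (omega_F / omega_D)**3, and each lost mode removes its potential
    energy contribution. c_v = 3 in the solidlike limit (all modes
    survive) and drops to 2 when all shear modes are lost.
    """
    x = omega_f_over_omega_d   # grows with temperature in a liquid
    return 3.0 - x**3

print(liquid_cv_per_atom(0.0))  # solidlike limit: Dulong-Petit value
print(liquid_cv_per_atom(1.0))  # all shear modes lost
```

The point of the sketch is only the mechanism: raising the temperature raises ωF, shrinks the phonon phase space, and lowers the heat capacity, qualitatively as observed in experiment.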

The small parameter in the liquid theory is therefore the same as in the solid state theory: small phonon displacements. However, in an important difference from solids, this small parameter operates in a variable phase space. This addresses the problems stated by Landau, Lifshitz and Pitaevskii above.

Viewed from a longer-term perspective, the history of liquid theory, and in particular the history of collective excitations in liquids, reveals a fascinating story which involves physics luminaries and includes milestone contributions from Maxwell in 1867, followed by Frenkel and Landau. A separate and largely unknown line of enquiry aiming to connect phonons to liquid thermodynamics involved the work of Sommerfeld and Brillouin over 100 years ago. This was around the same time as the papers by Einstein and Debye which laid the foundations of the modern solid state theory based on phonons. Development of this line of inquiry in liquids largely stopped soon after, and theories of liquids and solids diverged in their fundamental approach. Whereas solid state theory continued to be developed on the basis of phonons, theories of liquids started to use an approach based on interatomic interactions and correlation functions. As discussed in this book in detail, this approach faced several inherent limitations and fundamental problems.

The mathematics needed to discuss liquid theory is similarly interesting. For example, when we get to the equation written (but not solved) by Frenkel to describe phonons in liquids, we find that this equation was introduced by Kirchhoff in 1857 and discussed by Heaviside and Poincaré.

This book reviews this research, starting with the early work by Sommerfeld and Brillouin and ending with recent independent verifications of the liquid theory. I follow the variation of the phase space in liquids in a wide range of parameters on the phase diagram, from low-temperature liquids to high-temperature supercritical fluids. I then come back to low-temperature viscous liquids approaching liquid-glass transition.

I also show how developments in liquid theory resulted in new, unexpected insights. For example, the variation of the phase space available to phonons is related to liquid viscosity, which quantifies the ability to flow. Viscosity has a minimum related to the crossover of particle dynamics from liquidlike to gaslike. It turns out that this minimum is governed by fundamental physical constants, including the Planck constant. I show how this provides an answer to the question asked by Purcell and considered by Weisskopf in the 1970s, namely why viscosity never drops below a certain value comparable to that of water. I also show that a viscosity minimum set by fundamental constants implies that liquid-based life (water-based life in our world) is well attuned to fundamental physical constants, including the degree of quantumness of the physical world.

The liquid theory and its independent verifications discussed in this book focus on real liquids and their experimental properties rather than on model systems (such as hard-sphere and van der Waals models). This importantly differentiates this book from others.

The selection of topics in this book is helpfully aided and narrowed down by adopting a well-established approach in physics where an interacting system is fundamentally understood on the basis of its excitations. Consequently, a large part of this book discusses collective excitations in liquids and their relation to basic liquid properties throughout the history of liquid research. This shows how earlier and more recent ideas physically link to each other and in ways not previously considered.

In his well-known book “Gases, liquids and solids”, Tabor calls liquids the “neglected step-child of physical scientists” and the “Cinderella of modern physics” as compared to solids and gases. Although this observation was made nearly 30 years ago, Tabor would have reached the same conclusion regarding liquid thermodynamics on the basis of more recent literature. An important aim of this book is to make liquids a full family member on par with the other two states of matter, if a more sophisticated one owing to the liquid’s ability to sustain a variable phase space.

This book reaches out first to scientists at any stage of their career who are interested in the states of matter and in the history of the long-standing problem of understanding liquids theoretically. The second group are researchers and graduate students working in the area of liquids and related areas such as soft condensed matter physics and systems with strong dynamical disorder. The third group are lecturers looking to include liquids in undergraduate and graduate courses such as statistical or condensed matter physics, as well as students who can use this book as a reference.


The second edition of my textbook “*Numerical Methods in Physics with Python*” was published by Cambridge University Press in July 2023. Since its first edition, the book’s focus has been clear: foundational numerical methods are derived from scratch, implemented in the Python programming language, and applied to challenging physics projects. That first edition was published less than three years ago, so it may be worthwhile to see how the updates came about (and thereby also explain why a second edition was warranted). Over the last several semesters, I have been fortunate enough to teach both undergraduate and graduate courses on computational physics out of my textbook (repeatedly); thus,

“the changes to the book between editions have been directly driven by what worked in the classroom (and what didn’t)”.

The undergraduate course (renditions) revolved around a subset of the numerical methods/codes in the first edition, but I found that some further topics needed to be introduced, most notably on linear algebra (singular-value decomposition), optimization (golden-section search), and partial differential equations (finite-difference approaches). Perhaps even more crucially, given the heavy emphasis on math and programming in the undergrad version of the course, some students were left wanting more of a physics bent: to address that need, I created a large number of problems on physical applications both on standard themes and on topics that I have not encountered in other computational-physics textbooks (e.g., the BCS theory of superfluidity, the Heisenberg uncertainty relation, or the stability of the outer solar system). In each case, the idea was to complement the end-of-chapter (worked-out) Projects with (sometimes short, other times fairly extensive) problems showing how the numerical methods (and programming skills) developed in a given chapter can be put to use when studying physics. When introducing these new physical themes into the second edition, I sometimes found it natural to split them across chapters; to give but one example, the band gaps of solid-state physics are successively studied as a plotting, linear-algebra, root-finding, minimization, and integration problem.
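To give a flavor of one of the newly introduced optimization topics, here is a minimal golden-section search, the standard bracketing method for minimizing a unimodal one-dimensional function. This is a generic sketch under textbook assumptions, not the book's own implementation.

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    """Locate the minimum of a unimodal f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    c = b - invphi * (b - a)               # two interior probe points
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):        # minimum lies in [a, d]: shrink from the right
            b, d = d, c
            c = b - invphi * (b - a)
        else:                  # minimum lies in [c, b]: shrink from the left
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Minimum of (x - 2)^2 + 1 on [0, 5]:
x_min = golden_section_minimize(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(round(x_min, 6))  # -> 2.0
```

The golden ratio spacing lets each iteration reuse one of the two interior points, so the bracket shrinks by a constant factor of about 0.618 per step regardless of the function.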

The graduate course (incarnations) that I taught also necessitated new physics problems: perhaps unsurprisingly, these were closer to modern-day research (e.g., scalar self-interacting field theory, the gravitational three-body problem, or the optical Bloch equations). Given that the intended audience here was more advanced, I also worked out from scratch things that are usually taken for granted (e.g., the minimax property of Chebyshev polynomials or asymptotic normality) when not passed over in silence (e.g., the computation of complex eigenvalues or an iterative approach to the fast Fourier transform). Turning to the lectures: these typically focused on the most equation-heavy numerical methods from the first edition; I supplemented them with new material on many-dimensional derivative-free optimization as well as nonlinear regression. The latter led me to the hot topic of artificial neural networks (the power of which is exemplified by the accompanying plot). Speaking of regression, the single most important change in the second edition is a new section on statistical inference (which somehow manages to be both concise and lengthy): this starts out by recovering/justifying first-edition results (e.g., regarding the interpretation of the chi-squared statistic) before turning to the Bayesian approach, uncertainty bands, etc. While writing this new section I realized that the discussion of data analysis in many introductory (or not so introductory) textbooks is questionable, as summarized in *this spin-off journal article*.

A crucial aspect of the first edition was the inclusion of dozens of complete Python implementations of numerical methods or physical applications. The (six) new sections in the second edition have also led me to write six new codes, which are given at the companion *website* and discussed in gory detail in the main text. The fifteeneightyfour blog post I wrote when the first edition of the textbook came out (*see What’s wrong with black boxes?* ) goes over the motivation behind and significance of the codes. In the same spirit of working things out from scratch, the codes are further probed in the (140) new end-of-chapter problems. Speaking of which, typically computational-physics textbook authors either produce no solutions to the problems or provide solutions only to instructors teaching for-credit courses out of the textbook. I have followed the latter route, providing complete solutions of all programming problems to instructors; these are locked, since course instructors would not be able to assign them as homework problems otherwise. Even so, at *the companion website* I’m also providing a subset of the solutions to all readers, as a self-study resource.

In addition to the new sections, codes, problems, and solutions discussed above, while putting together the second edition I took the opportunity to read through the entire book multiple times and thoroughly tweak it, with a view to making the work more student-friendly. This ranged from introducing new footnotes or figures, to complete rewrites of first-edition sections, all the way to revamping the index. I have certainly enjoyed navigating this book’s wine-dark sea; perhaps you will, too.

Title: *Numerical Methods in Physics with Python*

Author: Alex Gezerlis

The post Computational physics gets a revamp first appeared on Fifteen Eighty Four | Cambridge University Press.

]]>The post Publication metrics don’t have to drive academia first appeared on Fifteen Eighty Four | Cambridge University Press.

Today, researchers are publishing more than ever before. New assistant professors have already published twice as much as their peers did in the early 1990s to secure a position in top departments or to achieve tenure. Nobel laureate Peter Higgs believes he wouldn’t be deemed “productive” enough for academia in today’s world. However, merely publishing more papers doesn’t cut it. The number of citations those papers receive is the true currency in science.

The Journal Impact Factor (JIF) reigns supreme. It is determined by averaging the citations received over the previous two years by the papers in a specific journal. In the US and Canada, 40% of research-intensive institutions mention the Journal Impact Factor in their assessment regulations. In Europe and China, the JIF is also employed as one of the key metrics. Eugene Garfield, the creator of the JIF, likened it to nuclear energy: beneficial when used properly but often misused. To be honest, what we witness nowadays is rampant abuse of metrics in academia.
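The arithmetic behind this average is simple enough to state in a few lines. The figures in the example are hypothetical, and the sketch keeps only the core ratio, leaving aside the details of what counts as a "citable item" in practice.

```python
def journal_impact_factor(citations_this_year, items_prev_two_years):
    """Schematic Journal Impact Factor for year Y.

    citations_this_year: citations received in year Y by items the
        journal published in years Y-1 and Y-2.
    items_prev_two_years: number of citable items the journal
        published in years Y-1 and Y-2.
    """
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 600 citations in 2024 to 200 papers from 2022-2023:
print(journal_impact_factor(600, 200))  # -> 3.0
```

Seen this way, the JIF is just an average citation count per recent paper, which is part of why a single highly cited paper can swing a journal's figure so easily.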

**Two blind spots**

*The Evaluation Game: How Publication Metrics Shape Scholarly Communication* aims to make scholars and policymakers aware of two key blind spots in the discourse on publication metrics. The first is the absence of the Soviet Union and post-socialist countries from histories of measuring science and evaluating research. The other is the lack of a geopolitical perspective in thinking about the contexts in which countries face the challenges of publish-or-perish culture.

Counting scholarly publications has been practiced for two centuries. In Russia from the 1830s, professors had to publish every year, and their publications helped determine their salaries. The Soviet Union and various socialist countries developed national research evaluation systems before the Western world did. The effects of those practices are still felt today.

**Designing better metrics is not enough**

I wrote *The Evaluation Game* to offer a fresh take on the origins and effects of metrics in academia, as well as to suggest ways to improve research evaluation. The book reveals that simply designing better and more comprehensive metrics for research evaluation purposes won’t be enough to halt questionable research practices such as predatory journals, guest authorship, or superficial internationalization, often seen as “gaming” the research evaluation systems. It’s not the metrics themselves, but the underlying focus on economics, that is driving the transformation of scholarly communication and academia itself.

With this book, I aim to demonstrate that a deeper understanding of the reasons behind the transformation of research practices can guide us toward better solutions for governing academia and defining the values that should shape its management. This is a crucial task today, as pressures on academia continue to mount and more countries are either implementing or considering the introduction of national evaluation regimes.

My hope is that this book can help us gain a better understanding of the role that measurement and research evaluation play in science. It’s impossible to conduct publicly funded research without some form of evaluation (either ex-post or ex-ante). Given this reality, it’s essential for us to ask how we can influence science policy and develop more responsible technologies of power.

Title: *The Evaluation Game*

How Publication Metrics Shape Scholarly Communication

Author: Emanuel Kulczycki

ISBN: 9781009351195

The post Publication metrics don’t have to drive academia first appeared on Fifteen Eighty Four | Cambridge University Press.

A vast body of observations now shows that black holes are not simply a theoretical possibility, but have central importance in the real universe. Gas infall – *accretion* – on to a black hole is the most efficient way there is of getting energy from ordinary matter. Accretion typically converts about 10 percent of the rest-mass energy of the infalling matter to outgoing radiation, while the process that makes most stars shine – fusing hydrogen nuclei to make helium – yields only 0.7 percent.
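
The two efficiencies quoted above can be compared directly with a back-of-the-envelope calculation using E = ηmc² (a rough illustration, not taken from the book):

```python
# Energy released per kilogram of rest mass, E = eta * m * c^2,
# for the two efficiencies quoted above.
C = 299_792_458.0  # speed of light, m/s

def energy_per_kg(efficiency):
    """Joules released per kilogram of rest mass at the given efficiency."""
    return efficiency * C**2

accretion = energy_per_kg(0.10)   # ~10% for accretion onto a black hole
fusion = energy_per_kg(0.007)     # ~0.7% for hydrogen-to-helium fusion

print(f"accretion: {accretion:.2e} J/kg")      # ~8.99e+15 J/kg
print(f"fusion:    {fusion:.2e} J/kg")         # ~6.29e+14 J/kg
print(f"ratio:     {accretion / fusion:.1f}")  # ~14.3
```

Kilogram for kilogram, accretion releases roughly fourteen times more energy than fusion, which is why it can power the most luminous sources in the universe.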

It follows that black hole accretion must power the most luminous astrophysical sources at every mass scale. The discovery of the huge power output of quasars led astronomers to suggest that these objects contained supermassive black holes (SMBH) with masses about one hundred million times that of the sun, somehow gaining gas at high rates from an unknown source. By now we know that almost all galaxies have a supermassive black hole at their centres, and quasars are simply the phase when the hole mass is growing most rapidly, and so producing the most luminosity.

Not long after these insights, astronomers realised that a scaled-down version of the same process, with black holes of masses only a few times that of the sun, was a likely explanation for the powerful X-ray emission from some binary star systems in the Milky Way. Here the source of the accreting matter is easier to understand: mass falls into the hole from the companion star. This picture closely ties the accretion making the X-rays in these systems to the evolution of the two stars in the binary. Before the X-ray stage, one of these stars must have finished its evolution by becoming a black hole, while the other eventually transfers gas to this black hole as it exhausts the hydrogen in its centre and expands to become a giant star. This expansion brings its surface closer to the black hole, where its gas can be captured by the hole’s gravity. X-ray binary systems often have conveniently observable timescales – orbital periods of a few days or even a few hours – and this explains why, until recent years, most progress in understanding accretion has come largely from studying them.

By comparison, our understanding of the supermassive black holes in the centres of almost all galaxies developed much more slowly. For many years there was an implicit assumption that the growth of supermassive black holes was simply a scaled-up version of the growth of stellar-mass black holes in binary star systems. This left many aspects obscure, such as how the gas needed to grow a central supermassive black hole could be induced to fall from distant parts of the galaxy towards a gravitationally insignificant object at its centre.

But in the last twenty years there has been a revolution in the study of supermassive black holes and their relation with their host galaxies. Astronomers found that the masses of most central supermassive black holes are directly related to two large-scale properties of their hosts. The total mass of the galaxy’s central spherical bulge of stars and interstellar gas is almost always about one thousand times the hole’s mass, and the way that stars and gas move in the bulge is also specified to great accuracy by this same mass. Since the hole’s mass is so small compared with the bulge, this cannot simply be orbital motion controlled by the hole’s gravitational pull. The discovery of these *scaling relations* has transformed our understanding of supermassive black holes and galaxies.

My new book, *Supermassive Black Holes*, discusses this revolution in detail. When the black hole mass is below the value specified by the scaling relations, it is evidently able to absorb all the gas falling close to it, and grow. But at the critical mass specified by the scaling relations, the radiation produced by accretion of just a small part of this gas drives the remainder outwards instead, colliding with the gas in the host galaxy’s central bulge. This process eventually sweeps all the bulge gas far away from the galaxy in spectacular outflows. The galaxy settles into a slow decline in activity, becoming ‘red and dead’ – its central supermassive black hole no longer produces much radiation, and the lack of gas in its bulge means that there is no vigorous formation of new stars which would make the bulge bright and blue.

These processes have fundamental consequences for the growth and evolution of the central supermassive black holes, and my book explores these in detail. Together, these insights are bringing astronomers closer to an understanding of the internal workings of galaxies, and ultimately of how the Universe came to be as we now observe it.

Title: Supermassive Black Holes

Author: Andrew King

ISBN: HB – 9781108488051

The post Black Holes and Galaxies first appeared on Fifteen Eighty Four | Cambridge University Press.

Some of Einstein’s (and others’) original thought experiments can now actually be performed. For example, the “twin paradox” experiment can be replicated by flying accurate atomic clocks. In the twin paradox thought experiment, a twin embarks on a space voyage, accelerates away and then back to Earth, and returns much younger than their earthbound sibling. In an appendix to my book “An Introduction to Special Relativity for Radiation and Plasma Physics”, I explore a twin paradox scenario where a twin departs aged 20 and returns at age 40, while on Earth several hundred years have elapsed. In atomic clock experiments the time differences between accelerated clocks and earthbound clocks are small, but measurable (see, for example, https://en.wikipedia.org/wiki/Hafele%E2%80%93Keating_experiment).
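
The arithmetic behind such a scenario is easy to sketch. Assuming a constant cruise speed and ignoring the acceleration phases, time dilation gives γ = t_Earth/t_ship and β = √(1 − 1/γ²). The figure of 400 Earth-years below is a purely hypothetical stand-in for the “several hundred years” of the appendix’s scenario:

```python
import math

def speed_for_dilation(ship_years, earth_years):
    """Constant cruise speed (as a fraction of c) at which earth_years
    elapse on Earth while ship_years elapse on board:
    gamma = t_earth / t_ship, beta = sqrt(1 - 1/gamma**2).
    Acceleration and turnaround phases are ignored."""
    gamma = earth_years / ship_years
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Hypothetical numbers: the traveller ages 20 years (departing at 20,
# returning at 40) while 400 years pass on Earth.
beta = speed_for_dilation(20.0, 400.0)
print(f"required cruise speed: {beta:.6f} c")  # ~0.998749 c
```

The required speed is extremely close to c, which is why only tiny (but measurable) time differences show up when the experiment is done with aircraft-borne atomic clocks.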

It is now possible to perform Einstein’s thought experiment in which light is reflected from a rapidly moving mirror. Plasmas created by focusing intense, short-pulse lasers onto solids reflect light at a critical plasma density that moves away from the original solid surface at a significant fraction of the speed of light. The light reflection follows Einstein’s relativity predictions (see Chapter 9 of “An Introduction to Special Relativity for Radiation and Plasma Physics”). The book introduces special relativity and explains many of the applications of relativity in plasma physics and radiation sources. It reviews the underlying theory of special relativity before extending the discussion to applications frequently encountered by postgraduates and researchers in astrophysics, high-power laser interactions, and the use of specialized light sources such as synchrotrons and free-electron lasers. I wrote the book because I felt that laser-plasma researchers and users of radiation sources needed a text at an appropriate level that explains relativity and its applications.

**Title:** An Introduction to Special Relativity for Radiation and Plasma Physics

**Author**: Greg Tallents

**ISBN**: 9781009236065

The post Relativity applications in radiation and plasma physics first appeared on Fifteen Eighty Four | Cambridge University Press.

But this result is not exact: there are further corrections from the electromagnetic interaction as well as from the proton structure, through the reduced mass of the two-body system as well as the proton size. All these corrections are given in powers of (m_{e}α)^{2} and m_{e}/m_{p}, with m_{p} the proton mass, and can be systematically calculated in an Effective Field Theory (EFT).
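
To get a feel for why this expansion converges so quickly, one can plug in standard constant values (a rough numerical illustration, not taken from the book; relative to the electron mass scale, the correction parameters are of order α² and m_e/m_p):

```python
# Rough sizes of the hydrogen-atom expansion parameters quoted above,
# using standard values of the constants.
ALPHA = 1.0 / 137.035999     # fine-structure constant
ME_C2_EV = 510_998.95        # electron rest energy m_e c^2, in eV
ME_OVER_MP = 1.0 / 1836.153  # electron-to-proton mass ratio m_e/m_p

# Leading-order (Bohr) ground-state binding energy: E = m_e c^2 alpha^2 / 2.
e_bohr = 0.5 * ME_C2_EV * ALPHA**2
print(f"Bohr binding energy: {e_bohr:.2f} eV")  # ~13.6 eV

# Relative sizes of the two correction parameters:
print(f"alpha^2 ~ {ALPHA**2:.1e}")              # ~5.3e-05
print(f"m_e/m_p ~ {ME_OVER_MP:.1e}")            # ~5.4e-04

# The leading m_e/m_p effect is the reduced-mass shift,
# mu = m_e / (1 + m_e/m_p):
e_reduced = e_bohr / (1.0 + ME_OVER_MP)
print(f"with reduced mass:   {e_reduced:.3f} eV")
```

Both expansion parameters are much smaller than one, so each successive order of the EFT expansion shifts the result by only a tiny fraction, as the reduced-mass correction to the 13.6 eV binding energy illustrates.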

This method can be extended to systems and reactions for which exact solutions, as in the case of the hydrogen atom, are not known. The strong interactions are the main playground for such EFTs, as they are incredibly difficult to solve, even with today’s biggest supercomputers. The basic idea of such an EFT is borrowed from quantum field theory, namely an expansion in tree graphs, one-loop diagrams, two-loop diagrams and so on. However, in contrast to the commonly used expansion in a small coupling constant, here one expands in small momenta and/or particle masses, divided by some large scale Λ, at which the EFT is no longer applicable. To make this work, a scale separation between the low-energy modes and the high-energy modes ~ Λ is mandatory.

Equally important are the symmetries and their realization for the system under consideration, as this severely limits the possible operators in the EFT. A simple example is parity invariance, which requires that a momentum operator **p** appears only in even powers like **p**^{2}, **p**^{4}, … in the EFT. Finally, one has to work with the relevant degrees of freedom, which for the strong interactions at low energies are pions and nucleons, not the fundamental fields, the quarks and gluons.

In our book, we discuss in detail the foundations of such EFTs of the strong interactions, with an emphasis on the sector of the light quarks: up, down and strange. Selected examples are worked out in detail that show how this powerful machinery works. We believe that this book fills a gap in the available literature and would welcome comments by our readers.

Title: Effective Field Theories

ISBN: 9781108476980

Authors: Ulf-G Meißner and Akaki Rusetsky

The post What are Effective Field Theories? first appeared on Fifteen Eighty Four | Cambridge University Press.

**Q: What is the “quantum age?”**

The quantum age is now! It is the world that we are in, which is defined at a very fundamental level by our knowledge, understanding and control of quantum phenomena.

Much of our understanding of the quantum world dates back to Albert Einstein and something called the photoelectric effect, which Einstein figured out in 1905. The photoelectric effect demonstrates that light arrives as a stream of things that have a precise, indivisible energy. That is, the energy in light is quantized. Today we use the word “photon” to describe these quanta. It’s for this work that Einstein received the 1921 Nobel Prize—not for his work on relativity.

So we’ve been living in the quantum age for more than a hundred years.

What’s new—the reason that we’ve written this book—is that increasingly scientists and engineers are figuring out how to use quantum effects to make ultra-precise measurements, to perform simulations and computations, and to create uncrackable communications systems. Broadly, this is quantum sensing, quantum computing, quantum “encryption” and quantum networking. Our book explains these different technologies and discusses why they are important.

**Q: I didn’t take physics in high school or college. Can I understand this book?**

We wrote this book to be understandable by lawyers and policy makers—people who have a general understanding of the world around us and perhaps still remember their high school algebra. One section of the book introduces the quantum physics aspects, but the descriptive and policy sections stand on their own.

**Q: Why does quantum information science matter to lawyers?**

Many companies are spending significant amounts of money now to prepare for using quantum technologies. These companies need both technical and legal advising. Just as today’s lawyers need to understand the internet, encryption, and search engines, lawyers increasingly need to understand quantum computing, quantum encryption, and particularly quantum sensing. Quantum sensing will present the deepest challenges to civil liberties, as devices may let us literally see through walls and read people’s thoughts.

This is also a great time for attorneys who are interested in policy to start engaging with these topics.

**Q: Why does quantum information science matter to policy makers?**

For policy makers, the time to engage on quantum is now! There are major quantum research and development initiatives happening right now in the US, Europe and China. And we are not all playing by the same rules: some see quantum as a purely scientific endeavor, others see it as a key to commercial or even national competitiveness. And some see quantum as an intelligence capability to be developed and possibly withheld from others.

At the present time there is a lot of partial information and even misinformation about quantum information science out there. We spent a lot of time chasing down leads and trying to understand what the real threats and opportunities are, and what is just quantum hype.

**Q: Are quantum computers going to crack all of the world’s codes?**

No, your codes are probably safe for a long time into the foreseeable future.

Quantum computing got a huge boost in the 1990s when computer scientists discovered that a functioning quantum computer would be able to factor large numbers much faster than any conventional computer. That was the discovery that drove literally billions of dollars into quantum computing research.

Now, after nearly thirty years of research, it’s clear that we are many, many years away from building a quantum computer that could crack one of today’s encrypted messages. And that computer, once it is built, will only be able to crack one message at a time. So an attacker would need to have recorded your encrypted message, kept it for decades, and then scheduled time on a very expensive piece of equipment to reveal your secrets.

Another thing that’s unlikely is that quantum computers will be able to crack passwords. Although we’ve seen this possibility referenced in several high-level reports, the science just doesn’t back it up.

What quantum computers may be able to do within the next ten years is solve highly complex scheduling and optimization problems. Quantum “simulators” may be able to assist in the design of catalysts, drugs and new materials. Those are the most consequential possibilities of quantum in the near term.

**Q: What is “Quantum Winter?”**

We think that there is a very real possibility that companies will not see a significant payoff from quantum information science in the short term. If that happens, companies may retreat from their investments. The result would be a “quantum winter” similar to the “AI winters” that happened in the 1970s and 1980s.

A danger of a quantum winter is that it might cause governments and companies to ignore the real benefits of quantum information science, just as many in the 1990s were slow to realize the benefits of AI.

Another danger is that many people who are now studying quantum computing, and nothing else, might find themselves unprepared and unable to spin up a new career.

**Q: Should we be teaching “quantum literacy” in grades K-12?**

We should, but that does not mean that we should be teaching quantum computing. We should be teaching quantum physics, rather than classical physics, from the beginning. There is no reason that primary school students can’t learn that light works like both a wave and a particle. This kind of thinking pays off in a whole bunch of areas—the idea that the world is not as it seems at first glance. And it’s easy to do experiments in the home or in the school with readily available materials to show the true quantum nature of our physical world. In fact, we include such an experiment at the back of the book, using three pieces of readily available polarizing film.

**Q: Where did the idea for this book come from?**

Chris was working on a law review article about quantum information science when he and Simson were seated next to each other on a flight from Tel Aviv to New York. Chris and Simson had known each other for almost two decades, but had never collaborated on a project. Simson offered to review the article, and after reading it, we decided to join forces and pitch the project as a book to Cambridge University Press.

The post Q&A with Chris Jay Hoofnagle & Simson L. Garfinkel, authors of ‘Law and Policy for the Quantum Age’ first appeared on Fifteen Eighty Four | Cambridge University Press.
