The second edition of my textbook *Numerical Methods in Physics with Python* was published by Cambridge University Press in July 2023. Since its first edition, the book’s focus has been clear: foundational numerical methods are derived from scratch, implemented in the Python programming language, and applied to challenging physics projects. That first edition appeared less than three years ago, so it may be worthwhile to see how the updates came about (and thereby also explain why a second edition was warranted). Over the last several semesters, I have been fortunate enough to repeatedly teach both undergraduate and graduate courses on computational physics out of my textbook; thus,

“the changes to the book between editions have been directly driven by what worked in the classroom (and what didn’t)”.

The renditions of the undergraduate course revolved around a subset of the numerical methods and codes in the first edition, but I found that some further topics needed to be introduced, most notably on linear algebra (singular-value decomposition), optimization (golden-section search), and partial differential equations (finite-difference approaches). Perhaps even more crucially, given the heavy emphasis on math and programming in the undergraduate version of the course, some students were left wanting more of a physics bent. To address that need, I created a large number of problems on physical applications, both on standard themes and on topics I have not encountered in other computational-physics textbooks (e.g., the BCS theory of superfluidity, the Heisenberg uncertainty relation, or the stability of the outer solar system). In each case, the idea was to complement the worked-out end-of-chapter Projects with problems (sometimes short, other times fairly extensive) showing how the numerical methods and programming skills developed in a given chapter can be put to use when studying physics. When introducing these new physical themes into the second edition, I sometimes found it natural to split them across chapters; to give but one example, the band gaps of solid-state physics are successively studied as a plotting, linear-algebra, root-finding, minimization, and integration problem.
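To give a flavor of one of these newly added topics, here is a minimal, generic sketch of golden-section search, which locates the minimum of a unimodal function by repeatedly shrinking a bracketing interval; this is my own illustration for this post, not one of the book’s codes:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Find the minimum of a unimodal function f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2        # inverse golden ratio, ~0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                    # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                              # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# the minimum of (x - 2)^2 is at x = 2
print(golden_section_min(lambda x: (x - 2)**2, 0.0, 5.0))
```

The golden-ratio placement of the two interior points is what lets each iteration reuse one function evaluation, shrinking the bracket by a constant factor per step.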

The incarnations of the graduate course that I taught also necessitated new physics problems: perhaps unsurprisingly, these were closer to modern-day research (e.g., scalar self-interacting field theory, the gravitational three-body problem, or the optical Bloch equations). Given that the intended audience here was more advanced, I also worked out from scratch things that are usually taken for granted (e.g., the minimax property of Chebyshev polynomials or asymptotic normality), when not passed over in silence (e.g., the computation of complex eigenvalues or an iterative approach to the fast Fourier transform). Turning to the lectures: these typically focused on the most equation-heavy numerical methods from the first edition; I supplemented them with new material on many-dimensional derivative-free optimization as well as nonlinear regression. The latter led me to the hot topic of artificial neural networks (whose power is exemplified by the accompanying plot). Speaking of regression, the single most important change in the second edition is a new section on statistical inference (which somehow manages to be both concise and lengthy): it starts out by recovering and justifying first-edition results (e.g., regarding the interpretation of the chi-squared statistic) before turning to the Bayesian approach, uncertainty bands, and so on. While writing this new section I realized that the discussion of data analysis in many introductory (or not-so-introductory) textbooks is questionable, as summarized in *this spin-off journal article*.
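As a taste of the kind of question that section addresses, here is a minimal sketch (again, my own illustration for this post, not the book’s code) of fitting a straight line to noisy data and computing the chi-squared statistic; for a good fit, chi-squared should come out comparable to the number of degrees of freedom:

```python
import numpy as np

# synthetic data: a straight line plus Gaussian noise of known size sigma
rng = np.random.default_rng(314)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y = 1.5 * x + 2.0 + rng.normal(0.0, sigma, x.size)

# linear least-squares fit y ~ slope * x + intercept
A = np.column_stack([x, np.ones_like(x)])          # design matrix
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# chi-squared: for a good fit it should be comparable to the number of
# degrees of freedom (here 50 points minus 2 parameters = 48)
chi2 = np.sum(((y - (slope * x + intercept)) / sigma) ** 2)
print(slope, intercept, chi2)
```

Interpreting that last number correctly (and what to do when it is far from the degrees of freedom) is precisely the sort of issue the new section works through.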

A crucial aspect of the first edition was the inclusion of dozens of complete Python implementations of numerical methods and physical applications. The six new sections in the second edition have led me to write six new codes, which are given at the companion *website* and discussed in gory detail in the main text. The Fifteen Eighty Four blog post I wrote when the first edition of the textbook came out (see *What’s wrong with black boxes?*) goes over the motivation behind, and significance of, the codes. In the same spirit of working things out from scratch, the codes are further probed in the 140 new end-of-chapter problems. Speaking of which, computational-physics textbook authors typically either produce no solutions to the problems or provide solutions only to instructors teaching for-credit courses out of the textbook. I have followed the latter route, providing complete solutions to all programming problems to instructors; these are locked, since course instructors would otherwise not be able to assign them as homework problems. Even so, at *the companion website* I am also providing a subset of the solutions to all readers, as a self-study resource.

In addition to the new sections, codes, problems, and solutions discussed above, while putting together the second edition I took the opportunity to read through the entire book multiple times and thoroughly tweak it, with a view to making the work more student-friendly. This ranged from introducing new footnotes or figures, to complete rewrites of first-edition sections, all the way to revamping the index. I have certainly enjoyed navigating this book’s wine-dark sea; perhaps you will, too.

Title: *Numerical Methods in Physics with Python*

Author: Alex Gezerlis

The post Computational physics gets a revamp first appeared on Fifteen Eighty Four | Cambridge University Press.

The post What’s wrong with black boxes? first appeared on Fifteen Eighty Four | Cambridge University Press.

In my recently published textbook, *Numerical Methods in Physics with Python*, I opt against the use of black boxes. Instead, I show students how to mathematically derive numerical techniques, how to implement them in the popular programming language Python, as well as how to use them to study problems that show up in physics.

there’s something vaguely aristocratic about the admonition “no need to trouble your head with that”

Much of my book is intentionally dedicated to reinventing the wheel. This may give rise to the question: what’s wrong with students taking standard methods for granted and, instead, spending all their time learning how to apply them to physics? In what follows, I go over several interrelated arguments for why doing things “from scratch” is a good idea. While I have computational physics in mind, many of the points I make below may also be of interest to those working in other areas of STEM.

- What’s wrong with focusing on the physics instead of getting distracted by programming? This brings to mind the trend of de-emphasizing deltas and epsilons when teaching calculus. Of course, even hard-core anti-rigorists wouldn’t dream of telling beginners that, e.g., integrals are a *mathematical* tool, so physics students don’t need to trouble themselves with mastering the concept. Similarly, one shouldn’t let students tackle complicated numerical integration problems without first exposing them to the standard techniques. Having a solid grounding in math and programming makes it easier (not harder) to focus on the physics.
- What’s wrong with students using a code they found on the web? In today’s climate, with fake news being a constant cause for concern, it is surprising to see students sometimes being willing to blindly trust programs of unknown provenance (as long as the relevant website is official-looking). An article I recently wrote with a colleague discusses examples where standard textbooks themselves cannot always be trusted, so random preprints or websites are clearly all the more suspect.
- What’s wrong with reading the documentation of one’s favorite library to decide which technique to use? It is true that excellent libraries do exist (e.g., NumPy and SciPy in the Python world) and it’s perfectly OK for an experienced user to employ their functionality. However, even if you trust the source, knowing which specific numerical method to use is not so trivial: a library’s documentation, typically made up of a list of functions, cannot provide much qualitative insight. As an example, a web search for libraries on interpolation will nudge you toward the use of splines (i.e., you likely wouldn’t even learn that Lagrange interpolation is a good idea). Other times, the documentation merely states that the library provides a wrapper to another library’s functionality.
- What’s wrong with using a library’s documentation to see what to do when the algorithm fails? This is certainly possible. However, it’s common for the documentation to lag behind the functionality contained in the code itself. This is often the case for numerical methods: the documentation typically doesn’t provide any equations, so there are technical aspects (e.g., analytical manipulations that could help, or related techniques with different convergence properties) that the reader wouldn’t even know to look for.
- What’s wrong with reading the actual code for a function in an open-source library? The short answer: you could, but you likely won’t. Even if you do read it, you’re almost certainly not going to like what you see. Libraries are designed to be efficient and general; as a result, they don’t typically make for good bedtime reading. Even so, it is important for beginners to learn how to structure their own programs well, so reading well-written (and intelligible) example code is valuable, just like reading good prose can help with one’s writing.
- What’s wrong with acknowledging that most people are users, not library authors, so they shouldn’t have to do things the hard way? First, all library authors started out as users; they were lucky enough to get good foundations that allowed them to progress. Second, there’s something vaguely aristocratic about the admonition “no need to trouble your head with that”. Third, even though the typical physics student is not going to go on to rewrite NumPy from scratch, students may end up writing widely used programs in a lab, in the industry, in finance, or in the software sector.
- What’s wrong with just taking somebody else’s word for it? The very question goes against the culture associated with modern science; trying to figure things out for oneself is a crucial aspect of doing physics. (Not for nothing is Nullius in verba the Royal Society’s motto.) The personality traits that make one uncomfortable with a code they cannot read/understand are the same as those that make one uncomfortable with a recipe-type approach to numerical methods, namely a code that implements a formula which drops from the sky. In Descartes’ famous image, when you’re walking alone in the dark, you would be well-advised to move slowly: you might not get very far, but at least you would be guarding against the possibility of falling.
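To make the earlier point about Lagrange interpolation concrete, here is a minimal from-scratch sketch (my own illustration for this post, not one of the book’s programs) of evaluating the unique polynomial through a set of points; a few lines suffice, which is exactly why it is worth knowing the method exists before reaching for a spline library:

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:                   # build the j-th cardinal polynomial
                term *= (x - xm) / (xj - xm)
        total += term
    return total

# three points on y = x^2: the quadratic interpolant reproduces it exactly
print(lagrange_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # → 2.25
```

A reader who has written these dozen lines knows precisely what the method assumes and where it can fail, which no list of library functions conveys.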

The majority of *Numerical Methods in Physics with Python* is devoted to deriving numerical methods from scratch; the 57 computer programs (also found at the companion website https://numphyspy.org) are a crucial component of the book. Of course, deriving and implementing everything from scratch is merely a noble ideal; I had to make some tough decisions about where to start and where to stop. As per another memorable image (due to Neurath), we are like sailors who must rebuild their ship on the open sea, so we cannot start afresh from the bottom.

**Numerical Methods in Physics with Python**

Author: **Alex Gezerlis**, *University of Guelph, Ontario*

Published: October 2020

Paperback ISBN: 9781108738934

Hardback ISBN: 9781108488846

eBook also available

