There are two very significant consequences of this. The first is a levelling of the playing field, so that (what start out as) smaller businesses can now compete disruptively with larger rivals. This can be seen in the finance market, where smaller ‘Fintech’ businesses are challenging established ‘bricks and mortar’ banks for the projected trillion-dollar mobile payments revenues; and in the emergence of Airbnb, Uber and other platforms of the ‘sharing economy’ that have disrupted the worlds of accommodation, transport, finance, and so on.

And the second consequence, which in part develops from the first, is the emergence, within the past 20–30 years, of tech titans such as Facebook, Amazon, Apple, Netflix and Alphabet (Google) – referred to generically as the FAANGs – whose economic and social power and influence are ubiquitous across the developed world.

The rapid growth and high productivity levels within the sector have demonstrated the economic benefits of entrepreneurship. Furthermore, research has shown that Science, Technology, Engineering & Maths (STEM) entrepreneurs in particular build on innovative foundations to create sustainable businesses.

My first experience of founding a digital technology business was more than 30 years ago. At that time, the vast majority of students graduating with STEM degrees would unquestioningly expect to join one of the major technology or service businesses that dominated the sector. Interestingly – apart from notable exceptions such as IBM, BAE Systems and Siemens – few of the tech giants of that day have survived. University courses reflected this situation by focusing their curricula and teaching methods on building core scientific and technical knowledge.

Over the past few years I have been active as a mentor and investor working with a range of start-ups and university spin-outs, through the Royal Academy of Engineering’s Enterprise Hub and some excellent privately-funded accelerator programmes. The vibrancy of the UK tech start-up sector is impressive and gives cause for optimism. Yet in London – a global hub of entrepreneurial activity – more than 30% of tech start-ups are currently struggling to recruit the talent they need to achieve their potential.

Against this backdrop it is interesting to review the ways that universities have adapted to the changes in the economy and the tech landscape. How are STEM students being equipped to start their own businesses, or to join exciting early-stage tech companies and address their skills shortages? And how have curricula and teaching methods changed?

Recent experience of teaching business innovation and entrepreneurship to STEM undergraduates and graduate students has provided me with three insights:

- Students studying STEM disciplines readily spark at the opportunities offered by the fast-moving world of tech business, and are keen to apply their inherent analytical skills to business innovation and design.
- A significant majority of STEM students graduate with only a rudimentary and anecdotal appreciation of how digital techniques, technologies and processes apply to business and entrepreneurship, despite a formal education which includes a wealth of relevant digital knowledge and skills. Recent UK research discovered that only 10% of engineers and 5% of scientists (1% of physicists) are exposed to entrepreneurship education. There are of course some notable institutions that have embraced the change wholeheartedly – to their significant benefit.
- While there are many excellent books that address innovation and entrepreneurship, there is a real need for a student textbook that addresses business coherently, and in a structured and practical way, for those planning to enter the modern digital world; a book that inspires, encourages and supports graduates to become job creators rather than merely job seekers.

Research has shown that entrepreneurship can be learned and developed. That is not to say that everyone has the potential to create a $1bn business, but that, given a structured and practical introduction, the majority of STEM graduates will be able to contribute significantly to a start-up, early-stage development, or innovation within a larger organisation. And this means it is possible to address the skills shortage that currently frustrates both businesses and the wider economy, and also to address the related US problem of rapid technical obsolescence among STEM graduates.

In a nutshell, this was my motivation for producing *Digital Innovation and Entrepreneurship*: a book that bridges the gap between formal STEM education and the digital business world; and provides a way of introducing innovation and entrepreneurship as core components of the skillset of the modern digital professional.

Bridging this gap is important to STEM students because the digital economy now encompasses more than half of the world’s population, and succeeding in this sector increasingly demands an effective balance of knowledge and skills in both business and technology. It is also important to higher education institutions in an increasingly competitive environment in which students demand courses that better reflect the world they expect to join. Lastly, it’s important to the economy of every advanced nation that the continuing vibrancy of its tech sector is fuelled by well-equipped and motivated STEM graduates.


In most departments the training starts with courses on ‘Mathematical Methods for Physicists’, where students learn the basics of integration, divs and grads, urgently required in the first-year curriculum. But the role of mathematics in physics transcends that of a collection of methods. At universities where this truth is reflected in the curriculum, conceptual teaching is often outsourced to departments of mathematics. After all, who would be better prepared to teach mathematical concepts than mathematicians themselves?

The above system works; otherwise it would not be implemented at a majority of academic institutions. The question is whether we can do better. We believe the answer is yes, and that the key to a modernized and more pedagogical approach to teaching mathematics in physics lies in a *stronger integration of conceptual and methodological elements in the mathematics education of physicists by physicists*.

What we have in mind is best explained by an example, the introduction of *vectors* early in the curriculum: the average beginner’s course starts from a hands-on introduction of vectors in *R^{n}*, with emphasis on

There is a better way of getting started. At the very beginning, invest two or so weeks into a systematic, bottom-up discussion of algebraic foundations — sets, groups, number fields, linear spaces. Students trained in this way ‘see’ groups and vectors everywhere, in functions, matrices, *R^{n}* and

Similar things could be said about integration theory, vector analysis, (differential) geometry, and other key disciplines of mathematics – conceptual and systematic introductions are rewarding investments which quickly pay off in fast and sustainable student progress. Our belief in this principle is backed by experience: we have taught the reformed lecture course underlying our textbook about ten times at two universities. Students trained in this way generally showed higher levels of confidence and proficiency in mathematics than those who went through the standard system. Remarkably, average and weak students are among those who benefit most; for them, it becomes easier to understand connections otherwise seen only by the best of the class. It should also be stressed that emphasis on mathematical concepts does not imply more abstraction. Yes, it does lead to more ‘hygiene’ in notation and to a language that appears ‘more mathematical’ than what is standard in physics courses. However, these elements are anchored in intuitive explanations, and hence are not perceived as abstract. They support students’ understanding, including that of concurrent courses in pure mathematics.

Encouraged by our uniformly positive experience we suggest a teaching reform at large, not just at our own universities. This was the principal motivation for the substantial work we put into converting our course into a textbook. It is meant to provide a template for what we hope may become a more rewarding introduction to the mathematics needed in contemporary physics.

I had been interested in RH for some time, studying the zeta function through flows such as ds/dt = ξ(s), which provided an equivalence. However this work, which had a topological basis, ‘hit the wall’ at the point where the structure of the flow near an essential singularity appeared to be important. The underlying theory was not available, and in the circumstances I was not able to develop it.

A visit to the University of Waikato by Tim Trudgian stimulated work together on aspects of Robin’s inequality and its RH equivalence. In addition to his sterling detailed work on Volume One Chapter 7, his own published work improving Turing’s method for zeta zero analysis was of great value in many chapters.

I approached CUP at some stage near the completion of a draft of volume one, and they showed interest. However, the expert feedback they received was mixed: not only did volume one not cover some of the most valuable equivalences to RH, it did not cover GRH. This was considered to be much more useful than RH for applications, and the idea of two volumes took shape.

Regarding Cambridge, I had been impressed with their expertise and dedication to publishing good mathematics when I worked with them, supplying an appendix and software for Dorian Goldfeld’s book *Automorphic Forms and L-functions for the group GL(n,R)* (Cambridge, 2006). The new experience writing “Equivalents” showed that this was no exception.

The writing process did not always go smoothly. Some parts, including whole chapters in one case, were scrapped: I decided that the details were either too technical or would be too taxing for the reader. My target average reader was a graduate student considering potential research problems in pure mathematics and looking for accessible problems. I avoided using results which were at the preprint stage at the time of writing, which sometimes meant leaving out published results that depended on unpublished work.

For volume one, the seminal 1962 paper of Rosser and Schoenfeld, and other related papers, provided particular organizational challenges. For volume two, I spent a long time working with Zagier’s group representation equivalence. Eventually I decided to give up: it would be too difficult to give the average reader an adequate background in the specialized theory, and in addition Zagier’s method could not be extended to number fields.

Given this graduate student target audience, I included quite a lot of background material: for example, the chapter on numerical estimates for arithmetic functions, and the work of Erdős and others on abundant numbers, in volume one. In volume two, an extensive set of appendices provides proofs of the more specialized results referred to in the body of the text, which the reader might not necessarily meet in graduate courses.

I was often asked whether I had (by now) solved RH! Writing a tome of this size does not leave much energy for such grandiosity, but I did twice believe I might have disproved RH. This was while writing volume two: once when considering integral equations, namely the method of Sekatskii, Beltraminelli and Merlini in Volume Two Section 8.3, and once when developing examples for Weil’s explicit formula in Volume Two Section 9.5. In both cases the approach came to nothing.

After the volumes were published I created a website for Errata and notes, GRHpack and RHpack. I have had an excellent volume of feedback giving corrections and other comments, which have been included or will be once time permits – this is especially welcome. Some folk even indicated they had been right through both volumes!

As expected, equivalents to RH and GRH continue to evolve. In late 2017, the University of Waikato had a visit from Ken Ono, who gave a fascinating lecture related to the Jensen polynomial equivalence of Pólya from 1927, namely that RH is equivalent to all of the Jensen polynomials of the Xi function being hyperbolic. He described a discovery by Michael Griffin, Larry Rolen, Don Zagier and himself which, among other advances, shows that for each degree all but a finite number of the Jensen polynomials are hyperbolic. This work is being written up and will be referenced in the “Errata and notes” relating to Volume Two Section 4.4 when a preprint appears on arXiv.
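Pólya’s equivalence can be made concrete in the lowest degree. Here is a minimal sketch (the helper names and toy sequences are my own, and I use simple integer sequences rather than the Taylor coefficients of the Xi function itself, which take serious computation): the degree-2 Jensen polynomial of a sequence {γ_j} is J_{2,n}(x) = γ_n + 2γ_{n+1}x + γ_{n+2}x², and its hyperbolicity (all roots real) is exactly the classical Turán inequality γ_{n+1}² ≥ γ_n γ_{n+2}.

```python
from math import comb

def jensen_poly(gamma, d, n=0):
    """Coefficients [c_0, ..., c_d] of J_{d,n}(x) = sum_j C(d,j) * gamma[n+j] * x^j."""
    return [comb(d, j) * gamma[n + j] for j in range(d + 1)]

def degree2_hyperbolic(gamma, n=0):
    """J_{2,n} has only real roots iff its discriminant is non-negative,
    which reduces to the Turan inequality gamma[n+1]^2 >= gamma[n]*gamma[n+2]."""
    a0, a1, a2 = jensen_poly(gamma, 2, n)
    return a1 * a1 - 4 * a0 * a2 >= 0

# A log-concave sequence (a row of binomial coefficients) passes at every shift...
row = [comb(10, k) for k in range(11)]
assert all(degree2_hyperbolic(row, n) for n in range(9))

# ...while a sequence with a dip fails: J_2(x) = 1 + x^2 has roots +/- i
assert not degree2_hyperbolic([1, 0, 1])
```

The Griffin–Ono–Rolen–Zagier result concerns Jensen polynomials of every degree; the degree-2 case shown here is just the one that can be checked with a discriminant by hand.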

In February 2018, Brad Rogers and Terence Tao posted on arXiv an article entitled “The de Bruijn–Newman constant is non-negative”, giving the RH equivalence Λ = 0. A full report on this work would make a nice addition to Volume Two Chapter 5. Both the Rogers–Tao and the Griffin–Ono–Rolen–Zagier works will be included in a second edition, should one be published.

Find out more about Kevin Broughan’s 2-volume work *Equivalents of the Riemann Hypothesis* here.

The famous American physicist, Richard Feynman, was born 100 years ago, on 11 May 1918, and it is worthwhile spending a few moments reflecting on what makes his achievements so enduring. To the general public, Feynman first became widely known with the publication in 1985 of a best-selling collection of stories from his life in physics called *‘Surely You’re Joking, Mr Feynman’*. The title refers to an incident in his introduction to graduate school at Princeton, at an event called the ‘Dean’s Tea’. This was unfamiliar territory for Feynman, who had grown up in Far Rockaway, a neighborhood in the New York City borough of Queens, and who had gone to MIT for his undergraduate degree. But it was Feynman’s participation in the presidential commission to investigate the Space Shuttle Challenger disaster in 1986 that made him one of the best-known physicists in the world. At a public meeting of the Commission, Feynman famously performed a demonstration of the cause of the shuttle disaster using a rubber O-ring, a clamp and a glass of ice water. It is still worthwhile looking at the video of the event on YouTube.

To physicists, Feynman is revered for many reasons but is probably best known for the ‘Feynman diagram’ approach to calculations of quantum field theory. Feynman’s approach to field theory calculations was pictorial and in marked contrast to the more formal mathematical approach of his fellow Nobel Prize Winner, Harvard professor Julian Schwinger. As Schwinger later said:

*“Like the silicon chips of more recent years, the Feynman diagram was bringing computation to the masses.”*


Feynman diagrams are now an integral part of theoretical physics. Ironically, Freeman Dyson, the person who proved that Feynman’s intuitive approach was actually the same as Schwinger’s more mathematical approach, never won the Nobel Prize although he was instrumental in getting Feynman’s space-time approach accepted by people like J. Robert Oppenheimer and the rest of the physics elite.

When I was an undergraduate student in Oxford I first came across Feynman through his famous ‘Red Books’ – the three-volume set of his ‘Lectures on Physics’. Feynman dedicated two years of his life to creating a two-year introductory course in physics for Caltech students that covered most of modern physics – mechanics, kinetic theory, electromagnetism and quantum mechanics. Although many of the students reportedly found the lectures hard-going despite Feynman’s inimitable style of lecturing, his Lectures on Physics have become a staple in the education of physicists around the world.

After completing a D.Phil (the Oxford equivalent of a Ph.D.) in theoretical physics in 1970, I was excited to be awarded a Harkness Fellowship to go to Caltech for two years as a post doc. Just before I left Oxford, Feynman had published a paper on ‘partons’ – an intuitively appealing picture of the proton as made up of point-like constituents. There was also great interest in the new experimental results from SLAC, the Stanford Linear Accelerator Center, on ‘deep inelastic scattering’ of electrons from protons. Feynman had originally only applied his parton ideas to proton–proton scattering, but on a visit to SLAC he had recently given a seminar showing how the new deep inelastic scattering results could be understood using his parton model of the proton.

I arrived at Caltech in 1970 feeling both trepidation and excitement and it was like moving from the slow lane to the fast lane on the freeway. At Oxford we had sort of absorbed the idea that the physics world revolved a little around Oxford, but at Caltech, it was clear that, to a first approximation, the UK, Europe and the rest of the world were largely irrelevant. This was the ethos of the theory group at Caltech with its two Nobel Prize winners, Richard Feynman and Murray Gell-Mann. In actual fact, my old professor in Oxford, Dick Dalitz, was one of the few physicists who had taken seriously the proposals by Gell-Mann, and independently, by George Zweig, then a professor at Caltech, for quarks as fundamental constituents of matter. Dalitz had developed a detailed quark model for baryons and mesons and showed that this had remarkable power to reproduce many features of the hadron spectrum found by experiment. Despite its clear theoretical inconsistencies, Dalitz regarded his explicit quark model as similarly useful as Bohr’s equally inconsistent model of the atom. Just as with Bohr’s model, Dalitz was convinced that the quark model pointed the way to some deep truths about Nature.

Feynman was never one to take other people’s calculations on trust and so he had developed his own version of the quark model with graduate student, Finn Ravndal, and post doc, Mark Kislinger. Perhaps because of his work with them, Feynman often used to have lunch with the graduate students and post docs at the Caltech campus cafeteria, universally known as ‘The Greasy’. It was here that I first heard versions of Feynman’s stories that he and fellow bongo drummer, Ralph Leighton, later wrote up for publication. The intellectual rivalry between Gell-Mann and Feynman was legendary and Gell-Mann frequently grumbled about what he regarded as Feynman’s ‘myth making’.

My most intimidating moment at Caltech was at an informal lunch-time lecture I had agreed to give to the experimental particle physicists. The group was led by new Nobel Prize winner, Barry Barish, with Frank Sciulli and they had just been awarded funding for an important experiment on deep inelastic neutrino scattering. Feynman’s parton explanation of deep inelastic electron scattering had been written up – with due acknowledgement to Feynman – by ‘BJ’ Bjorken and Manny Paschos who had both attended his lecture at SLAC. All I was going to do in my lecture was to explain how the parton model could be applied to neutrino scattering. However, you can imagine my surprise when I arrived to give my talk to see Feynman sitting in the audience. In fact, all went well until I was nearing the end of the lecture when Feynman jumped up and said:

*“Stop. Draw a line. Everything above the line is the parton model – below the line are just some guesses of Bjorken and Paschos.”*

As I rapidly became aware, the reason for Feynman’s sensitivity on this point was that Murray Gell-Mann was going around the Lauritsen building at Caltech growling things like *“Anyone who wants to know what the parton model predicts needs to consult Feynman’s entrails.”* The point that Feynman was making was that all the results above the line in my seminar were identical to predictions that Murray had derived using fancier algebraic techniques. Feynman just wanted to dissociate his parton model predictions from some of the wilder parton model predictions of others. My lecture was just an opportunity for him to do that.

What made a Feynman lecture unique? The well-known Cornell physicist, David Mermin, once said *“I would drop everything to hear him give a lecture on the municipal drainage system”*. Why was this? An LA Times editor captured the essence of a Feynman lecture with the words:

*“A lecture by Dr. Feynman is a rare treat indeed. For humor and drama, suspense and interest it often rivals Broadway stage plays. And above all, it crackles with clarity. If physics is the underlying ‘melody’ of science, then Dr. Feynman is its most lucid troubadour.”*


The article went on to say:

*“No matter how difficult the subject – from gravity through quantum mechanics to relativity – the words are sharp and clear. No stuffed shirt phrases, no ‘snow jobs’, no obfuscation.”*

In his Nobel Prize lecture, instead of giving a talk about the beautiful Feynman diagram framework he had created, Feynman chose to show some of his missteps along the way to his eventual success:

*“That was the beginning and the idea seemed so obvious to me and so elegant that I fell deeply in love with it. And, like falling in love with a woman, it is only possible if you do not know too much about her, so you cannot see her faults. The faults will become apparent later, but after the love is strong enough to hold you to her. So, I was held by this theory, in spite of all the difficulties, by my youthful enthusiasm.”*

What of Feynman’s legacy today? In 1981, at a conference at MIT, Feynman gave a lecture in which he asked the question *“Can physics be simulated by a universal computer?”* He then answered his question with the statement:

*“I’m not happy with all the analyses that go with just classical theory, because Nature isn’t classical, dammit, and if you want to make a simulation of Nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem.”*

Feynman then put forward an example of a quantum computer and now, over 35 years later, physicists and engineers all around the world are seriously trying to build and operate such a computer.

Finally, Feynman was always passionate about the need for what he called ‘utter scientific integrity’. In a commencement address to Caltech students in 1974 he said:

*“Learning how not to fool ourselves is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.”*

In his fine biography of Feynman, James Gleick memorably summed up Feynman’s philosophy towards science with the words:

*“He believed in the primacy of doubt, not as a blemish upon our ability to know but as the essence of knowing.”*


Tony Hey

Kirkland, Washington

11 May 2018

Richard Feynman wrote the Prologue and Epilogue to Tony’s book ‘The New Quantum Universe’ (Second Edition). Read these chapters for free on Cambridge Core.


After this incident, the particulars of Klein’s life become difficult to separate from those of her eventual husband, George Szekeres (the one who proved the more general statement about *n*-gons). Neither started out as mathematicians. Because of the restrictions placed on Jews in Hungary in the late 1920s, only two students from Szekeres’s school could study science or mathematics at the university in Budapest; Márta Svéd took the mathematics position, so Klein necessarily studied physics instead. George studied chemical engineering, motivated by his family’s leather business. The two became refugees in Shanghai, and then after the end of World War II moved to Adelaide, where they shared an apartment with Márta Svéd and her family. George became a university mathematics lecturer and Esther raised their children while working as a mathematics tutor. In 1964, the family moved to Sydney. Esther became one of the first mathematicians at the newly-founded Macquarie University, where she is “fondly remembered as a gifted and inspiring tutor”; Macquarie gave her an honorary doctorate in 1990. She and her husband died within hours of each other, in 2005.

Their joint *Sydney Morning Herald* obituary writes of Esther that “The mathematical love of her life was always geometry, in which she outshone George.” So with this as background, I was interested to learn more about some of her work in geometry. I found a paper, “Einfache Beweise zweier Dreieckssätze” (“Simple proofs of two triangle theorems”), that she published in 1967 in the journal *Elemente der Mathematik* (in German, despite being a Hungarian in Australia). The title promises two theorems about triangles, both of which concern what happens when you inscribe a triangle *XYZ* into a larger triangle *ABC* (with *X* opposite *A*, etc.), dividing *ABC* into four smaller triangles.

Szekeres’s two theorems are that the area and perimeter of the central triangle *XYZ* are at least equal to the minimum area or perimeter among the three surrounding triangles. It’s possible for *XYZ* to be one of the smallest triangles, but this can only happen when *XYZ* has equal area or perimeter to another of the four small triangles; it can never be the unique smallest one. For instance, when *XYZ* is the medial triangle of *ABC*, all four smaller triangles are congruent to each other (and similar to the big triangle).

The theorems themselves are not original to Szekeres, and her paper details their history of publication and solution in various mathematical problem columns. The perimeter inequality is also connected with a classical piece of geometry, Fagnano’s problem of finding an inscribed triangle *XYZ* of the minimum possible perimeter.

Stripped of some unnecessary detail, her proof of the area theorem is simple and elegant. Suppose that *BX:BC* is the smallest of the six ratios into which the three points *XYZ* divide the sides of the triangle; the other five cases are symmetric. Draw two additional lines, *L* through *X* parallel to *AB*, and *M* parallel to *XZ* but twice as far from *B*. Then it follows from the choice of *BX:BC* as the smallest ratio that *Y* lies on the segment of *AC* on the far side of *B* from *L*, and that *M* separates *X* from this segment.

So if we place a point *D* at the intersection of line *M* and segment *XY*, we have

area(*XZB*) = area(*XZD*) ≤ area(*XYZ*),

where the left equality relates two triangles with the same base *XZ* and equal heights, and the right inequality is containment of one triangle in the other.
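The area inequality is easy to spot-check numerically. Here is a minimal sketch (the point and function names are my own; following the labelling above, *X* lies on *BC*, *Y* on *CA* and *Z* on *AB*), using nothing beyond the shoelace formula for triangle area:

```python
import random

def area(p, q, r):
    """Absolute area of triangle pqr via the shoelace formula."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def lerp(p, q, t):
    """Point dividing segment pq in ratio t : (1 - t)."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

random.seed(1)
for _ in range(10000):
    A, B, C = [(random.random(), random.random()) for _ in range(3)]
    if area(A, B, C) < 1e-6:
        continue  # skip near-degenerate outer triangles
    X = lerp(B, C, random.random())  # X on BC, opposite A
    Y = lerp(C, A, random.random())  # Y on CA, opposite B
    Z = lerp(A, B, random.random())  # Z on AB, opposite C
    corner_areas = [area(A, Z, Y), area(B, X, Z), area(C, Y, X)]
    # Szekeres's theorem: the central triangle is never strictly smaller
    # than all three corner triangles (tolerance for floating-point error)
    assert area(X, Y, Z) >= min(corner_areas) - 1e-9
```

Random sampling is of course no substitute for the proof, but it is a pleasant way to see the theorem holding across many configurations at once.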

Most of Szekeres’s other publications were in mathematical problem columns, and included similar styles of reasoning applied to other geometry problems. Beyond geometry, the subjects of her research included arithmetic combinatorics and graph theory. Still, it is clear that it is in geometry, and in particular in the problem of convex polygons in point sets, where she made her most far-reaching contribution to mathematics. Her problem became foundational for two major fields, discrete geometry and Ramsey theory, and has led to a huge body of research by other mathematicians.

**For other posts by David Eppstein, visit his blog. Also make sure to find out more about Forbidden Configurations in Discrete Geometry. **

I was also able to attend a special seminar series at the Institute for Advanced Study, Princeton, on automorphic forms and L-functions, take several courses with Dorian Goldfeld on a related topic, and attend the joint CUNY–NYU–Columbia number theory seminar in New York, and its dinners. It was during one of these latter that Dorian asked if I would write a software package in Mathematica for the book he was writing on “GL(n,R)”, a group which for quite deep reasons is important in the field of automorphic forms and L-functions in number theory. His concept was that the intensive matrix manipulation and other computations, such as finding the symbolic form of Casimir operators, would be greatly assisted by having a package of related functions.

Now I had had some experience with Mathematica programming, having supervised a PhD candidate, Rene Ferdinands (now at the University of Sydney), who used its symbolic matrix inversion and simplification facilities as part of solving a system of ODEs with an application to that popular Australian sport, cricket. I said yes to Dorian, not knowing how far from light the task ahead would be.

As the chapters of Dorian’s book appeared as drafts I would go through them, doing some editing and thinking through which parts would lend themselves to computation. Along the way I learnt some of the tricks of the trade of mathematics expository book writing – such as repeating definitions near where they are used, lightening the burden on the reader’s memory. In the end the manual for the software came to over 70 pages, and Cambridge generously agreed to include it as an appendix.

This book, I believe, fills a large gap in our available texts, especially for beginning researchers. It is very concrete, focusing on the group SL(n,Z) acting on GL(n,R), but building up progressively through n=2, n=3 and then the general case. It includes finely worked full proofs. Dorian had the benefit of many conversations with another Columbia mathematician, Hervé Jacquet, who was a founder of the theory of L-functions and automorphic forms in the wider setting of adeles, which Dorian covered in later works, also published by Cambridge.

**The software “GL(n)pack” is available here. **

**Find out more about Equivalents of the Riemann Hypothesis, available as a 2-volume hardback set and separate volumes 1 and 2. **

**Also check out Automorphic Forms and L-Functions for the Group GL(n,R) by Dorian Goldfeld**

Check out the rest of the blog post here

Want to find out more about the book? Take a look at our website and co-author Iain Currie’s post *Mortality By The Book*

Fast forward twenty-five years, and the picture is far less clear. The complexity of computers and software has grown to such an extent that even relatively small smartphone applications are created by teams of developers, and nobody understands every aspect of a CPU chip, much less an entire PC or tablet. Who now should be classified as an expert? One possibility is that an expert is a person who may sometimes need to look up the details of a rarely used command or feature, but who is never confused or frustrated by the behaviour of the system or software in question (except where there is a bug), and never needs help from anyone, except perhaps on rare occasions from its creators.


This rather stringent definition makes me an expert in only two areas of computing: the Fortran programming language, and the mathematical computation system Maple. An argument could be made for the typesetting system LaTeX, but whilst this has a large number of expert users, there is also a much smaller group of more exalted experts, who maintain the system and develop new packages and extensions. It would be fair to say that I fall into the first category, but not the second.*

How does one achieve expert status? Some software actively prevents this, by hiding its workings to such an extent that fully understanding its behaviour is impossible. Where it is possible to gain expert status, I have experienced two very different routes, both starting during my time as a research student, when it became clear that Fortran and Maple would be useful in my work. There were several parallels. I knew a little about both, having used them for basic tasks as an undergraduate. However, working out why things went wrong and how to fix them was time-consuming and unrewarding, since it often relied on magic recipes obtained from unreliable sources, and in many cases I didn’t really understand why these worked, any more than I understood why my own attempts had not. I realised then that knowing a little was at the root of these problems. Partial knowledge, supplemented by contradictory, outdated and even downright bad advice from websites and well-meaning individuals (some of whom invariably labour under false pretences of their own expert status) is not an efficient way to approach scientific computing. In fact it’s just a recipe for frustration.

In the case of Fortran, fixing this turned out to be easy, because there are lots of good books on the subject. Reading one of these eliminated all of my problems with the language at a stroke. I can’t claim that I remembered every command and its syntax, nor do I know them all now. This is hardly surprising — the Fortran Language Standard (a very terse document that sets out everything the language provides) now extends to more than 600 pages. Instead, the book provided a general picture of how things work in Fortran, and showed the right way to go about tackling a problem. This investment in time has since paid itself back hundreds of times over.

The route to expert status in Maple was far more challenging. Its own help pages give a very comprehensive description of individual commands, but they are intended as a reference guide, and if it’s possible to become an expert using these alone, then I never discovered the correct order in which to read them. I found a number of books on Maple in the university library, but most were too basic to be useful, and others focused on particular applications. None seemed likely to give me the general picture — the feel for how things work — that would make Maple into the time-saving resource it was intended to be.

The picture became clearer after I taught Maple to students in three different courses. Nothing encourages learning better than the necessity to teach someone else! Investigating the problems that students experienced gave me new opportunities to properly understand Maple, and eventually the few remaining gaps were filled in by the Programming Guide. This is a complex document, similar in length to the Fortran Language Standard, but with more examples. Personally, I would recommend it only to readers with experience of programming language specifications.

Students now started to ask how I came to know so much about Maple, and whether there was a book that would teach them the same. Since no such book existed, I decided to write one myself. As the old adage goes, if you want something doing properly, do it yourself. The project soon began to evolve as I tried to set down everything that the majority of Maple users need to know. I’ve always hated books that skirt around important but difficult topics, so where before I might have used a dirty trick to circumvent a problem, now I felt compelled to research exactly what was going on, and to try to explain it in a simple, concise way.

When the first draft was complete, I approached Cambridge University Press (CUP). The editor arranged for reviews by four anonymous referees**, and by Maplesoft’s own programming team. This led to several major improvements. My colleague, Dr Martyn Hughes, also deserves a mention for his efforts in reading and commenting on four different drafts. Meanwhile, Maplesoft continued to release new editions of their software, and the drafts had to be revised to keep up with these. The cover was created by one of CUP’s designers, with instructions that it should not look too ‘treeish’ — one might be surprised by the number of books that have been written about Maple syrup, and it would be a shame for Understanding Maple to be mixed up with these by potential readers browsing the internet.
Then there were the minor details: How wide should the pages be? What font should be used? Should disk be spelled with a ‘c’ or a ‘k’? Could quotes from other sources be used without the threat of legal action over copyright infringement? One rights holder laughably tried to charge $200 for a fragment of text from one of their books. Needless to say, no greenbacks were forthcoming.

The resulting book is concise, with all the key concepts needed to gain an understanding of Maple, alongside numerous examples, packed into a mere 228 pages. It gives new users a solid introduction, and doesn’t avoid difficult topics. It isn’t perfect (in fact I have already started to list revisions that will be made if a second edition is published in the future) but I’ve seen very few problems that can’t be solved with the material it contains. Only time will tell whether *Understanding Maple* will create new experts. At the very least, I would certainly like to think it will make Maple far easier to grasp, and help new users to avoid some of the traps that caught me out many years ago.

**Find out more about Understanding Maple and order your copy today**

*One can always identify an expert LaTeX user by the fact that they know enough to realise that they are not a ‘real’ expert.

**Actually not wholly anonymous; the name of the referee who made the most suggestions for improvements was left in one of the documents sent to me by CUP, so a grateful mention can be made here of Michael Monagan, one of the original creators of Maple.

When Peter began working at System Development Corporation (SDC), he programmed using punch cards on time-shared mainframe computers. By the time he retired from Microsoft Research, he had implemented and optimized arithmetic code running on the mobile processors in smartphones. Peter worked as a programmer from the earliest stages of software engineering, when computers needed to be housed in their own buildings, through to the era of ubiquitous computing, in which people carry computing devices in their pockets. Both his algorithmic and mathematical contributions and his work as a programmer were instrumental to these massive advances that spanned his career — a career of more than forty years, impressive by the standards of the software industry.

This book is a tribute to his scientific work. Every chapter starts with an idea of Peter’s that made a significant contribution to the field of computational number theory or cryptography. The idea is explained in detail, and the research that followed from it over the years is summarized. Most of Peter’s contributions are inspired by the integer factorization problem, his main research interest since high school. Hence, multiple chapters are dedicated to various techniques for factoring integers. Another chapter is dedicated to Montgomery curves and related work: such curves are used in modern instantiations of key-agreement protocols, since they offer performance and security benefits over other types of elliptic curves. As a result, this book can serve both as reference material for cryptographers and security experts and as a good introduction for computational number theory enthusiasts.
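Peter’s best-known arithmetic technique, the modular reduction that bears his name, replaces the expensive division in modular multiplication with shifts and masks. As a brief illustration only (a minimal Python sketch with illustrative parameters, not code drawn from the book):

```python
# Minimal sketch of Montgomery reduction (REDC). The modulus n and
# word size r_bits below are small illustrative values, not realistic ones.

def montgomery_setup(n, r_bits):
    """Precompute constants for odd modulus n with R = 2**r_bits."""
    r = 1 << r_bits
    n_prime = (-pow(n, -1, r)) % r  # n * n_prime = -1 (mod R)
    return r, n_prime

def redc(t, n, r, r_bits, n_prime):
    """Return t * R^{-1} mod n, for 0 <= t < n*R, with no division by n."""
    m = ((t & (r - 1)) * n_prime) & (r - 1)  # m = (t mod R) * n' mod R
    u = (t + m * n) >> r_bits                # t + m*n is exactly divisible by R
    return u - n if u >= n else u

# Multiply a * b mod n via Montgomery form.
n, r_bits = 97, 8
r, n_prime = montgomery_setup(n, r_bits)
a, b = 42, 57
a_mont = (a * r) % n                                      # into Montgomery form
b_mont = (b * r) % n
prod_mont = redc(a_mont * b_mont, n, r, r_bits, n_prime)  # = a*b*R mod n
result = redc(prod_mont, n, r, r_bits, n_prime)           # out of Montgomery form
print(result, (a * b) % n)  # the two values agree
```

Because the inner loop needs only multiplications, masks and shifts, the technique maps naturally onto the kind of processor arithmetic Peter spent his career optimizing.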

Find out more about *Topics in Computational Number Theory Inspired by Peter L. Montgomery*

Yet, large-scale phylogeny estimation turns out to be much more difficult than expected. First, all the best methods are computationally intensive, and standard techniques do not scale well to large datasets; massive parallelism helps but does not really address the basic challenge inherent in searching an exponential search space. Another issue is that the statistical models of sequence evolution that properly address genomic data are substantially more complex than the ones that model individual loci, and methods to estimate genome-scale phylogenies are (relatively speaking) in their infancy compared to methods for single gene phylogenies. Finally, there is a substantial gap between performance as suggested by mathematical theory (which is used to establish guarantees about methods under statistical models of evolution) and how well the methods actually perform on data – even on data generated under the same statistical models! Indeed, this gap is one of the most interesting things about doing research in computational phylogenetics, because it means that the most impactful research in the area must draw on mathematical theory (especially probability theory and graph theory) as well as on observations from data.
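The exponential search space mentioned above can be made concrete: the number of distinct unrooted binary tree topologies on n labelled taxa is the double factorial (2n-5)!!. A minimal Python sketch (the sample values of n are illustrative):

```python
# Number of distinct unrooted binary (leaf-labelled) tree topologies
# on n taxa: (2n-5)!! = 3 * 5 * ... * (2n-5), which explodes as n grows.

def num_unrooted_trees(n):
    """Count unrooted binary tree topologies on n >= 3 labelled leaves."""
    count = 1
    for k in range(3, 2 * n - 4, 2):  # odd numbers 3, 5, ..., 2n-5
        count *= k
    return count

for n in (5, 10, 20):
    print(n, num_unrooted_trees(n))
# 5 taxa already admit 15 topologies, 10 taxa over two million,
# and 20 taxa roughly 2.2 * 10**20 -- far too many to enumerate.
```

Even modest datasets therefore rule out exhaustive search, which is why heuristics and clever algorithm design dominate the field.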

Computer scientists have brought innovative algorithm design techniques into computational phylogenetics that are dramatically improving the accuracy and scalability of phylogeny estimation. Many of these new methods are now being used by evolutionary biologists to compute multiple sequence alignments, construct species trees and phylogenetic networks from genome-scale datasets, and make biological discoveries. It is clear that computer science techniques can – and will – enable breakthroughs in biological discovery for the genome-scale datasets that are being assembled around the world.

*Computational Phylogenetics: An Introduction to Designing Methods for Phylogeny Estimation* is designed to train the next generation of algorithm developers so that they can develop these new methods and enable these breakthroughs. The book is self-contained, and no biology background is needed. Although the focus is on communicating mathematical foundations and innovative algorithm design, much of the material is accessible to biologists and others who are interested in critically evaluating the scientific literature about phylogeny estimation methods in this post-genome era.

Find out more about *Computational Phylogenetics* and Tandy Warnow