In fact the volume is about opening a genuine public debate on the true nature of space and time, starting with a public panel discussion on the topic in Cambridge, England, in 2006. It grew out of my increasing unease about the portrayal of fundamental physics — quantum gravity in particular — as already solved by string theory when, in fact, theoretical physics is in need of fresh, profound ideas and of contact with experiment, and when these are the most exciting and turbulent of times.

I also insist in the preface to On Space and Time that this debate **needs** to involve not only scientists but the wider public. The reason is that scientists’ ideas have to come from somewhere: from sitting around in cafés, from the contemplation of art. We don’t know where the key revolutionary idea is going to come from. Put another way, to progress, scientists now need to see what Science **is**, which means they have to step outside it and see it, in part, as a non-scientist would.

In particular, this being Christmastime, I want you to ask yourself what someone singing a Christmas carol has to say about quantum gravity. What does that person have in common with a theoretical physicist? What I think they have in common is contemplation of the infinite: a sense of something bigger than ourselves. As a confirmed atheist I won’t call it God, but it’s a sense of awe at the Universe and a wonder about our place in it. My approach as a theoretical physicist is to use mathematics and the scientific method to explore the issue, while a carol singer is surely using other means to ‘connect’.

In fact it is only since the 17th century and the Enlightenment that Science somehow replaced religion as the font of physical truth. The Scientific Method pioneered by Hooke and others replaced religious dogma, which was good, yet it is itself based on certain assumptions and ways of doing things, such as dividing knowledge into ‘theory’ and ‘experiment’; in other words, on some other dogma.

As a scientist I am 1000% committed to the Scientific Method, but I see it as a particular way of exploring reality, one that we might now need to understand better by seeing it from the outside.

What I am going to argue now is that what we know about quantum gravity — what we have seen in earlier posts — is telling us that the Scientific Method itself is perhaps **the** fundamental ‘meta-equation’ of physics. To see what I have in mind, consider playing chess while forgetting, or not being aware of, the **rules** of chess (perhaps because you learned them at a very early age). Then as you play, you experience the reality of chess, the frustration of being checkmated and so forth. In this sense the joining of a club, the acceptance of rules or constraints, ‘creates’ a bit of reality: the reality of chess.

**What if Physical Reality is no different, created by the rules of looking at the world as a Scientist?** In other words, just maybe, as we search for the ultimate theory of physics we are in fact rediscovering our own assumptions in being Scientists, the Scientific Method?

To explain why I think so, we need to think about the nature of representation. Imagine a group of artists gathered around a scene X, each drawing their own representation of it from their own angle, style and ethos. Any one bit **x** of the scene is represented by an artist **f** of the collection as, say, a fleck of paint on their canvas. Now, the amazing thing — and *this is possibly the deepest thing I know in all of physics and mathematics* — is that one could equally well view all this another way, in which the ‘real thing’ is not the scene X, which might after all be just a useless bowl of fruit, but the collection X* of artists. So it is not bits **x** of X being represented but rather the artists **f** in X*. Each point **x** of the fruit bowl can be viewed as a representation of X*, in which the artist **f** is represented by the same fleck of paint as before. By looking at how different artists treat a single point **x** of the fruit bowl we can ‘map out’ the structure of the collection X*.

This is a deep duality between observer and observed which is built into the nature of representation. Whenever any mathematical object **f** represents some structure X, we can equally well view an element **x** of X as representing **f** as an element of some other structure X*. The same numbers **f**(**x**) are now written as **x**(**f**). In mathematics we say that X and X* are dually paired. Importantly, **one is not more ‘real’ than the other**.
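As a toy illustration of this dual pairing (a sketch in plain Python, with all names my own invention), the same table of numbers can be read either as functions evaluated on points, or as points evaluated on functions:

```python
# A toy illustration of dual pairing: the same table of numbers f(x)
# can be read as "functions f evaluated on points x" or as
# "points x evaluated on functions f". All names here are illustrative.

X = [0, 1, 2]                      # the 'scene': a set of points
X_star = [lambda x: x + 1,         # the 'artists': functions on X
          lambda x: 2 * x,
          lambda x: x ** 2]

# Reading 1: each f in X* represents the structure of X
table_1 = [[f(x) for x in X] for f in X_star]

# Reading 2: each x in X is a functional on X*, with x(f) := f(x)
evaluate_at = lambda x: (lambda f: f(x))
table_2 = [[evaluate_at(x)(f) for f in X_star] for x in X]

# The two readings contain exactly the same numbers, transposed:
assert table_1 == [list(row) for row in zip(*table_2)]
print(table_1)  # [[1, 2, 3], [0, 2, 4], [0, 1, 4]]
```

Neither reading of the table is more fundamental than the other, which is the point of the duality.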

So within mathematics we have this deep observer-observed or measurer-measured duality symmetry. We can reverse roles. But is the reversed theory equivalent to the original? In general no; the bowl of fruit scene is not equivalent to the collection of artists. But in physics perhaps yes, and perhaps such a requirement, which I see as coming out of the scientific method, is a key missing ingredient in our theoretical understanding.

To put some flesh on this, the kind of duality I am talking about in this post is typified in physics by position-momentum or wave-particle duality. Basically, the structure of addition in flat space X is represented by waves **f**, where **f** is expressed numerically as the momentum of the wave. But the allowed **f** themselves form a space X*, called ‘momentum space’. The key to the revolution of quantum mechanics was to think of X and X* as equally real, allowing Heisenberg to write down his famous commutation relations between position and momentum. The key was to stop thinking of waves as mere representations of a geometrical reality X but as elements in their own right of an equally real X*. The idea that physics should be position-momentum symmetric was proposed by the physicist Max Born around the birth of quantum mechanics and is called Born Reciprocity. This in turn goes back (probably) to ideas of Ernst Mach.
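For readers who want to see the relation itself, here is a quick symbolic check (assuming sympy is available; the setup is my own illustration) that position and the usual wave operator for momentum obey the Heisenberg relation on an arbitrary test function f:

```python
# Symbolic check of the Heisenberg relation [x, p] = i*hbar, with the
# momentum p represented as the wave operator -i*hbar*d/dx acting on
# an arbitrary test function f(x).
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

p = lambda g: -sp.I * hbar * sp.diff(g, x)   # momentum as a wave operator

commutator = x * p(f) - p(x * f)             # [x, p] acting on f
assert sp.simplify(commutator - sp.I * hbar * f) == 0
print(sp.simplify(commutator))               # I*hbar*f(x)
```

The commutator closes on i*hbar times the test function, which is exactly the statement that X and X* fail to commute when both are taken as real.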

What I am going to argue is that such a principle also seems to hold, and to be key to quantum gravity, at a wider and more general level. We saw this for quantum gravity in three spacetime dimensions last week, but the principle as I have explained it above is even deeper.

Well, it’s Christmas and I have to go catch a plane, so let me leave you with a diagram from **On Space and Time** and an earlier 1991 paper* of mine. It is an overview of all self-dual mathematical structures in the sense above. You will see self-dual concepts (I mean that X* is the same type of object as X) along the central axis; these include such things as flat spacetime. As soon as X becomes, for example, curved, which relates to gravity, you stray below the axis, while dual to this, above the axis, is quantum theory. The two are in a dual relationship in the representation sense.

For physics to be self-dual you then need both: as I see it, Physical Reality splits into bits, each consisting of sets of representations of the other, in what I now call the ‘self-representing Universe’. Moreover, this philosophical postulate of a self-representing universe is not empty — it provides a constraint on the mathematical structure of the ultimate theory of physics. I showed in my PhD thesis many years ago that in the self-dual paradigm of quantum groups something like a baby version of Einstein’s equations can emerge.

And if it is the *only* constraint, then we will have shown that Physical Reality is dictated by a certain equity between ‘abstract structure’ and ‘experimental representation’, i.e. rooted in the assumptions of being a physicist. Strip away those assumptions, like realising that chess is only a game, and you transcend to a level of awareness in which the material physical world is like the reality of chess. I don’t say ‘illusion’ as in Buddhism, since as a scientist this material world is **the** thing of interest. But the philosophy that I get out of this obviously has much in common with it. I call it ‘Relative Realism’.

Still think that Religion and Quantum Gravity have absolutely nothing of worth to say to each other? Merry Christmas!

***Note:*** Shahn Majid has been talking around his paper “The Principle of Representation-theoretic Self-duality,” *Physics Essays* (1991) and the later parts of his chapter of **On Space and Time**.

There is a tradition, starting I think with Edwin A. Abbott’s 1884 tale ‘Flatland’, where we suppose that we are not three-dimensional beings but, let us say, ants, constrained to live forever on some two-dimensional surface. We tend to visualize a surface — imagine, say, the surface of a sphere or doughnut — within three dimensions, but don’t be fooled by that: it is just an aid to visualization. An ant crawling about on the surface, moving along ‘shortest paths’ (the analogue of straight lines in a flat space), could fully map out the geometry of the surface without ever leaving it.

I am speaking here of the spatial geometry. We will assume that time is a further linear dimension, making spacetime 3-dimensional, mapped out as the 2-dimensional surface evolves in time.

Actually, we won’t assume any of this, since as I explained in previous blogs, there is no evidence of an actual spacetime continuum of any dimension. But we will take it as a commonly accepted starting point and then I will explain carefully where we have to make the quantum leap to throw all that away to get to actual quantum gravity. This will also give you a bit of insight into the guts of the way that scientific revolutions work in practice.

Now, you may ask, in this day and age, when string theorists are happy to contemplate a landscape of 10^{500} possible vacua: what is so special about three dimensions? In any dimension the modern way of thinking about gravity is in terms of a ‘metric’. This is a gadget which, at each point of spacetime, allows one to compute the distance to all nearby points. It goes back to the 19th-century mathematician Riemann and was used by Einstein. The mathematicians Cartan and Weyl found a different way of thinking about this in terms of a ‘frame’ and a ‘connection’ at each point of the spacetime. Their theory works in any dimension, but I am going to cut straight to a very special answer that exists only in three spacetime dimensions.

First, think about what you can do to a rigid object in three flat dimensions (in our case one of these is time, but you will be able to visualise better if you don’t worry about that). Well, you can move the object around and you can rotate it. Together these form a classical symmetry group E_3 of ‘translations and rotations’ in three dimensions. The same holds in a three-dimensional **flat** spacetime.

What the Cartan-Weyl theory amounts to in three dimensions is to express the metric geometry of spacetime in terms of a gadget that assigns to every loop in spacetime an element of E_3: the translation and rotation matrix for transport around the loop. Let me just show you an example to get the idea. Referring to the figure, suppose you start at point P and keep your eye on an infinitesimal displacement dx (a differential in the language of two posts ago). Now suppose you move along the loop keeping the angle to the direction of motion fixed. Try it out on a globe at home; the arcs are supposed to be quarter great circles and P is the north pole. When you get back to P, will you still be pointing the same way as the original dx? **Not necessarily**, when the geometry is curved! Instead there will be some ‘rotation’ that relates the result to the original dx, about 90 degrees in the example. This is how connections are handled in the Cartan-Weyl theory; the new twist is that frames can be viewed as enlarging a usual connection so that it assigns not just rotations but ‘translations and rotations’ to each loop. This enlarged connection gadget encodes all of the geometry of our three-dimensional spacetime, including its curvature.
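The globe example can be checked numerically. Here is a small sketch (assuming numpy; the helper names are my own) that composes the parallel transports along the three quarter great circles and measures the net rotation of dx:

```python
# Numerical sketch of the parallel-transport example: carry a tangent vector
# from the north pole around a triangle of three quarter great circles and
# measure the net rotation it comes back with.
import numpy as np

def rot(axis, theta):
    """Rotation matrix about a unit axis by angle theta (Rodrigues formula)."""
    ax = np.asarray(axis, dtype=float)
    K = np.array([[0, -ax[2], ax[1]],
                  [ax[2], 0, -ax[0]],
                  [-ax[1], ax[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

q = np.pi / 2  # a quarter great circle
# Parallel transport along a great-circle arc is the rotation about its axis:
leg1 = rot([0, 1, 0], q)   # north pole (0,0,1) down to (1,0,0)
leg2 = rot([0, 0, 1], q)   # along the equator from (1,0,0) to (0,1,0)
leg3 = rot([1, 0, 0], q)   # back up from (0,1,0) to the north pole

loop = leg3 @ leg2 @ leg1
v = np.array([1.0, 0.0, 0.0])          # a tangent vector dx at the north pole
v_transported = loop @ v

angle = np.degrees(np.arccos(np.clip(v @ v_transported, -1, 1)))
print(int(round(angle)))  # 90 -- the vector returns rotated by a right angle
```

The 90-degree holonomy is exactly the curvature of the sphere showing up in the loop, as described above.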

Now, in this new language, Einstein’s equations in a vacuum, which control how spacetime curves, are expressed simply as follows: the element of E_3 given by transporting round a loop does not change as you move the loop a little. And when there is matter, which for our purposes we idealise as perfect points in space (or lines in spacetime), the same is true as long as your loop does not cross one of these particles. From this, you can see that the connection gadget is fully specified by choosing a certain fixed number of elements of E_3: you have E_3 elements associated to loops that go around any of the point particles, and you have E_3 elements associated to loops that wrap around the surface that we are crawling around on. Even though the geometry of the surface may evolve in time, its number of holes, i.e. its topology, is assumed not to change. Under these technical assumptions the entire geometry, and Einstein’s equations for gravity, are now encoded in some fixed number of elements of the symmetry group E_3.

So much for classical gravity in three dimensions. Now we are ready for quantum gravity. In conventional physics there is an elaborate ‘machinery’ of quantisation. In fact there are two, the Feynman path integral and the Heisenberg algebra method (known in physics as canonical quantisation). Both of them are **fundamentally flawed** but for rather different reasons.

The Feynman path integral is based on the idea that classical mechanical systems are governed by a quantity called the ‘action’ (basically, it is like the energy) which is ‘minimized’ when the classical equations of motion are obeyed. The quantum theory is defined by summing over all possible classical configurations with a weighting given by the classical action. This kind of works, to first approximation, more or less ‘by design’. It is very much like a statistician cooking up some random variables out of the average values that they *want* the random variables to obey. It is putting the cart before the horse: the classical continuum geometry should emerge out of some deeper principle for the quantum theory. I covered this point in my post on **religion and science**.

The Heisenberg algebra method consists of the idea that in classical mechanics one can carefully choose variables (‘coordinates’) in a special form similar to position and momentum in quantum mechanics. One then quantises the theory by replacing these classical variables by quantum ones obeying the famous Heisenberg relations. Personally, I think this is a bit more on the right track, but I would have to ask: why limit ourselves to the Heisenberg algebra? The honest answer is that until about 20 years ago this algebra (and perhaps the angular momentum algebra) was the only algebra familiar to physicists. In other words, mainly a lack of imagination.

This was exploded in the late 1980s with the arrival of **quantum symmetry** and quantum or **noncommutative geometry**. From the point of view of a quantum geometer like me, limiting ourselves to the Heisenberg algebra is analogous to assuming that the Earth is flat, whereas the actual possibilities mathematically are much richer.

In our case, there is a mathematically clear choice of quantum symmetry, ‘quantum E_3’. So, how do we quantise gravity in three spacetime dimensions? Just replace E_3 by the quantum version of E_3. Most classical symmetries have well-established quantum versions and this is one of them. **Hey presto! We are done, and we have just quantized gravity in three dimensions with point particles as matter.** Notice the critical point where we actually throw away all of the ‘continuum baggage’ and replace it by a deeper (quantum symmetry) principle. But we retain the ability to recover it in a special approximation where the quantum symmetry behaves almost classically, which is as it should be.

There is more. We know from astronomy that there seems to be some mysterious **dark energy** or ‘cosmological constant’ needed in our actual (albeit four-dimensional) Universe. In the three-dimensional model this arises very naturally as follows. At the level of classical gravity we must replace E_3 by a group SL_2 of 2 x 2 matrices of determinant 1. (The determinant of a 2 x 2 matrix is the product of the diagonal entries minus the product of the off-diagonal entries.) The difference between this and E_3 is that ‘translations’ are replaced by something a bit more complicated. But this too has a quantum symmetry version. Now, I mentioned in my post on **quantum symmetry** that the very concept involved an ‘arrow-reversal’ duality in which one could reverse things, replacing the quantum symmetry by a ‘dual’ quantum symmetry. When you do this to the translation-like part of the quantum version of SL_2 you, more or less, **get an equivalent theory**.

In short, quantum gravity in three dimensions with a cosmological constant or dark energy is **self-dual** under a certain duality operation. This self-duality is lost if one works with plain quantum gravity without the dark energy, i.e. it could explain why the dark energy is there. And this self-duality is, as I explain in my chapter of the multiauthored volume **On Space and Time**, a deep principle for quantum gravity. We had a first taste of it, at a speculative level, last week.

**Editor’s Note:** Shahn Majid has been talking around his research with B. Schroers in “q-Deformation and semidualisation in 3D quantum gravity,” arXiv:0806.2587.

So far in these posts we have looked at testable quantum gravity effects, but I have not said much about the ultimate theory of quantum gravity itself. There is a simple reason: I do not think we have a compelling theory yet.

Rather, I think that this deepest and most long-standing of all problems in fundamental physics still needs a revolutionary new idea or two for which we are still grasping. More revolutionary even than time-reversal. Far more revolutionary and imaginative than string theory. In this post I’ll take a personal shot at an idea — a new kind of duality principle that I think might ultimately relate gravity and information.

The idea that gravity and information **should** be intimately related, or if you like that information is not just a semantic construct but has a physical aspect, is not new. It goes back at least some decades to the work of Bekenstein and Hawking showing that black holes radiate energy due to quantum effects. As a result it was found that black holes could be viewed as having a certain entropy, proportional to the surface area of the black hole. Let’s add to this the ‘no-hair theorem’, which says that black holes, within General Relativity, have no structure other than their total mass, spin and charge (in fact, surprisingly like an elementary particle).

What this means is that when something falls into a black hole all of its finer information content is lost forever. This is actually a bit misleading, because in our perception of time far from the black hole the object never actually falls in but hovers forever at the edge (the event horizon) of the black hole. But let’s gloss over that technicality. Simply put, then, black holes gobble up information and turn it into raw mass, spin and charge. This in turn suggests a kind of interplay between mass, or gravity, and information. These are classical gravity or quantum particle arguments, not quantum gravity, but a true theory of quantum gravity should surely explain all this much more deeply.

Here then is an idea from a *Physics Essays* article I wrote some 20 years ago, and now covered in my chapter of **On Space and Time**. Let’s start with the very simplest theory of ‘physics’ — and I don’t mean Physics 101, I mean at the level of my 1-year-old son. At the moment he is busy (I think) naming things, dividing the world into different classes of objects — apples, pears, fire engines — and learning about logical connectives such as ‘and’, ‘or’ and ‘not’. What I mean here, broadly, is the logical part of language as a description of the world, something which goes back to Aristotle, if not earlier. The theory consists of statements about objects falling into these various groups. And perhaps you have come across a useful ‘pictorial’ way of thinking about this called ‘**Venn diagrams**’. Basically, fix some Universe of all possible things within some context, represented by a box. Inside this box different groups of objects are represented as (possibly overlapping) regions.

Now, here is the neat bit. You can just as easily work with the complement of a region as with the region itself. Thus if a certain region is called ‘apples’ then its complement, ‘not-apples’, is some other perfectly good region or concept. Although not formalised until many centuries later, in the work of the British mathematician De Morgan, classical logic at this level has a deep symmetry in which speaking about a class of objects is equivalent to speaking about its complement:

“This is **not** an apple or an orange” is equivalent to “This is a not-apple and a not-orange.”

It would be quite cumbersome, but we could systematically replace the concept of apple by the concept of not-apple throughout our language. In working with complements you have to interchange ‘or’ with ‘and’ (as in the example above) and ‘nothing’ with ‘everything’ (within some context), and so forth. It is a transformation of the whole way that we talk about the world. In the case of apples and oranges it’s obviously better to work with a ‘localised concept’ than with the delocalised ‘not-apple’, but what about adjectives such as ‘heavy’, where there is less of a bias? “This is not a heavy apple” is equivalent to “this is either not an apple or it is not heavy.”
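This complementation duality is easy to verify mechanically. The following sketch (plain Python, illustrative only) checks both De Morgan laws over all truth assignments:

```python
# Brute-force check of De Morgan's duality: exchanging concepts with their
# complements swaps 'or' with 'and' across all truth values.
from itertools import product

for apple, orange in product([False, True], repeat=2):
    # "This is not (an apple or an orange)" ...
    lhs = not (apple or orange)
    # ... is equivalent to "this is a not-apple and a not-orange."
    rhs = (not apple) and (not orange)
    assert lhs == rhs

# The dual law, as in the 'heavy apple' example:
for heavy, apple in product([False, True], repeat=2):
    assert (not (heavy and apple)) == ((not heavy) or (not apple))

print("De Morgan duality holds in all cases")
```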

Even among material things it might not be so clear-cut. For example, spacetime is all-pervasive, and in geometry we tend to speak in terms of points where a common property of spacetime is **not** true, such as where the curvature is singular. So we could imagine an alien race of people who use language in a reversed way to us; at the level of Aristotle and logic, they would have a transformed but equivalent view of the world.

Clearly this complementation-symmetry of logic is lost in more advanced theories of physics. It was lost as soon as Newton discovered gravity, because real apples cause gravity (curve spacetime as we say these days) while not-apples need not.

**My proposal is that in quantum gravity something like this complementation symmetry should be restored.**

How **might** this work? As I explained in one of my early posts, in our modern understanding of gravity you cannot put more than a certain amount of mass into a given volume. If you try, it just forms a black hole; if you put in more mass, the black hole just gets bigger. From the point of view of our alien race this statement will appear as something else entirely. Remember that they think in terms of objects which are the non-presence of our objects. So, they might say something like: a region of space cannot be completely empty of their matter (totally full of not-matter, from their point of view). **This could be, for them, a limitation of their quantum theory**.

I explained in my post on dark energy that quantum theory suggests a ‘sea’ of virtual particles even in what is thought of as empty space, and that a small ‘minimum energy density’ also seems to be observed by cosmologists. It is true that our theoretical understanding fails to match experiment by a factor of some 10^{123}, but I think this just reflects our total lack of understanding of quantum gravity. What I think we see is at least a hint that our gravity should appear in the alien race’s complementary language as something like their quantum theory, and vice versa. That is also why one needs quantum gravity to hope to realize this fully. This reversal-of-viewpoint symmetry may also tie up with ideas about time-reversal.

While this idea remains vague and speculative, there is supporting evidence at the level of mathematical structure. In quantum theory the more appropriate kind of logic is ‘intuitionistic’. It means that we drop the law of logic that says that either something is true or its complement is true. So we do not have just the options for something to be an apple or a not-apple. Thus, in the famous thought experiment known as ‘Schroedinger’s cat,’ one has a cat in a sealed room that will be killed by release of a poison gas if a certain radioactive decay occurs. (This kind of thought-experiment would not be allowed today, by the way.) From our vantage point outside the room the cat should theoretically be in some kind of mixed quantum state neither dead nor alive until we look inside and see if the decay has taken place. In quantum theory, one can have (linear) combinations of complementary statements.

If this idea for quantum gravity is correct, then the complementary concept, co-intuitionistic logic, in which one drops the law of classical logic that ‘a statement and its complement cannot both be true’, should be the first step in geometry (which, ultimately, becomes gravity). Indeed, in geometry a region **does** have an intersection with its complement, known as the boundary of the region. Going back to the Venn diagram, the perimeters of the regions have no structure in classical logic, as there is no intersection of a set and its complement, but allowing one is arguably the birth of geometry.
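One can see the failure of classical logic in a miniature model. The sketch below (plain Python, with illustrative names of my own) treats the open sets of a tiny topological space as an intuitionistic logic: ‘not A’ is the largest open set disjoint from A, the law of excluded middle fails, and the gap is precisely the boundary of the region:

```python
# A small model of intuitionistic logic: the open sets of a finite
# topological space, where 'not A' is the largest open set disjoint
# from A and excluded middle can fail.
X = frozenset({1, 2, 3})
opens = [frozenset(s) for s in [set(), {1}, {3}, {1, 3}, {1, 2, 3}]]

def negate(A):
    """Pseudo-complement: the largest open set disjoint from A."""
    return max((U for U in opens if not (U & A)), key=len)

A = frozenset({1})
print(sorted(negate(A)))        # [3]
print(sorted(A | negate(A)))    # [1, 3] -- not all of X: excluded middle fails

# The missing part is exactly the boundary of the region A:
boundary = X - (A | negate(A))
print(sorted(boundary))         # [2]
```

In classical (Boolean) logic A together with not-A always fills the whole box; here the leftover point plays the role of the perimeter, the ‘birth of geometry’ described above.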

There is in fact a school of category theorists, led by the mathematician F.W. Lawvere, who have looked at ‘co-intuitionistic logic’ as a conjectured ‘birth of geometry’. Incidentally, I met Lawvere once, at a conference in 1991. He took me aside over lunch and told me that while he liked my conference talk, he wondered whether I had read Lenin who, apparently, had proposed something related to my duality ideas. It’s admittedly been on my ‘to read one day’ list since then.

And if appealing to top mathematicians does not convince you, how about Douglas Adams? Replacing a set by its complement becomes, in probability theory, the idea of replacing a probability p by 1-p. There are certain solid-state systems where such a transformation is a symmetry, and I am proposing something similar in quantum gravity. But if you have read *The Hitchhiker’s Guide to the Galaxy* (or better still, listened to the BBC radio script, which is far more hilarious than the books or TV show), you will of course be reminded of the Heart of Gold with its ‘infinite improbability drive’ famously powered by reversing probabilities. Douglas Adams’ comic genius is not in dispute, but could he also have touched upon a truly deep idea for quantum gravity?

Only time will tell.

After last week’s speculations on time I would like to ask an even deeper question: *why is there time*?

My 4 year old daughter would be proud. What I mean is, why do things evolve in the first place? It seems to me that fundamental physics has to answer not only ‘what’ questions but also ‘why’ questions if it claims to provide understanding. I think I have an answer, or a glimpse of one.

The answer has to do with quantum anomalies; no, not the large (not very quantum, then) anomalies that seem to turn up in every other episode of *Star Trek Voyager*, but what physicists mean by the term, which I am afraid is much more dry and dusty. In fact, I’m going to have to ask you to dust off your high school calculus books, just for a minute.

I explained in a previous **post** that even if nobody at the moment knows how to reconcile quantum theory and gravity, quantum spacetime should emerge as an effect coming out of any such unknown theory. Typically, the coordinates x,y,z of space would also be quantum variables, so space alone should typically form some kind of symbolic algebra. Due to quantum effects, the order of the variables in this algebra will matter: xy will typically not coincide with yx. One says that the algebra is ‘noncommutative’.
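Matrices give perhaps the most familiar example of such a noncommutative algebra; this tiny sketch (assuming numpy, my own illustration) shows xy differing from yx:

```python
# Matrices form the simplest example of a noncommutative algebra:
# the order of multiplication matters.
import numpy as np

X = np.array([[0, 1],
              [0, 0]])
Y = np.array([[0, 0],
              [1, 0]])

print(X @ Y)   # [[1 0], [0 0]]
print(Y @ X)   # [[0 0], [0 1]]
assert not np.array_equal(X @ Y, Y @ X)   # xy differs from yx
```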

Now, what about differential calculus on such a quantum space? If you remember any high school calculus it means things like dx, dy, dz as the ‘infinitesimal differences’. Newton and Leibniz both considered such things as numbers which are then made arbitrarily small. Hands up if your high school calculus class contained a picture like the one shown at left. It defines differentiation of a function f in the x direction as a limit of the slope df/dx of the triangle as dx gets small.
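In code, the textbook picture amounts to the following numeric sketch (plain Python, names my own): the slope of the triangle tends to the derivative as dx shrinks:

```python
# The high-school definition of the derivative: the slope
# (f(x+dx) - f(x))/dx approaches df/dx as dx gets small.
def slope(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2     # the derivative at x = 3 should be 6
for dx in [1.0, 0.1, 0.001, 1e-6]:
    print(dx, slope(f, 3.0, dx))
# slopes: 7.0, 6.1..., 6.001..., 6.000001... -> tending to 6
```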

So to develop quantum gravity effects in physics we also need ‘quantum differentials’ dx, dy, dz. They should enjoy the properties that differentials enjoy in Newton’s theory except that, since xy and yx need not coincide, similarly y dx need not coincide with dx y, etc. Now, here is the remarkable thing one finds as one digs deeper into this world of quantum geometry:

**Most sufficiently noncommutative ‘quantum spaces’ described by algebras with variables x,y,z do not admit any reasonable self-contained algebra of differentials dx,dy,dz.**

This is called an anomaly for quantum differentiation, or quantum anomaly for short. In physics, when a classical symmetry does not survive the theoretical passage from our understanding at the level of classical mechanics to quantum mechanics, one speaks of an ‘anomaly’. An example arose when physicists first tried string theory in the realistic four spacetime dimensions. Their theory had an anomaly for conformal symmetry, and this was ‘fixed’ by changing the dimension to 26, and later to 11 or 10, though nowadays I hear that a landscape of 10^{500} possibilities is currently quite popular. In that sense, **if** you subscribe to string theory, you have a ‘prediction’, or possible necessity, of higher dimensions.

In my case what I mean by ‘reasonable’ is a differential calculus that respects in some form the symmetry that should rotate the x,y,z among themselves. Among such calculi there is an obstruction uncovered for many quantum spaces in my work with my colleague Edwin Beggs, a mathematician at Swansea in the UK.

As is typical for any anomaly, the thing to do is to increase the number of dimensions to absorb the obstruction. So if you subscribe to quantum spaces, you will typically be forced to invent at least one extra dimension, with a differential which I will call ‘dt’. In fact you will be forced to invent time.

Let’s put some flesh on this. From the differentials dx, dy, dz, dt you can recover the corresponding ‘differentiation in direction x,y,z,t’ operations (the so-called partial derivatives), and you can do it algebraically without recourse to pictures. If you let f(x,y,z) be an element of our quantum space algebra, you find that its ‘differentiation in the t direction’ comes out not as zero but, in the simplest models, as something like the energy operator in Schroedinger’s equation.

So, not only are you forced to invent time; your original space variables obey an equation which, in the limit of ordinary space (i.e. as you remove quantum gravity effects), becomes Schroedinger’s wave equation. **Oops**, you’ve just answered the question ‘why is there quantum mechanics?’ My 4-year-old has not even gotten to that one.

And none of this is an accident. For any sufficiently ‘quantum’ space there are general mathematical reasons to expect that for any reasonable differential d there will exist an abstract differential element which I will call ‘dt’ obeying

dt f – f dt = L df

where L is our Planck-scale parameter expressing the effects of quantum gravity. So, if t was not one of your variables you would be forced to invent it so as to have dt. Note that **both sides of the equation are zero in ordinary geometry**, since there functions commute with differentials. In other words, this equation is invisible until you study quantum gravity, which is why such an origin of time is not seen in ordinary classical and quantum physics.
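One reassuring structural point can be checked symbolically (assuming sympy’s noncommutative symbols; the setup is my own illustration, not the full model): the map f -> (dt f - f dt)/L automatically obeys the Leibniz rule that any honest differential must satisfy:

```python
# Symbolic check that d(f) := (dt*f - f*dt)/L is automatically a derivation,
# i.e. it obeys the Leibniz rule d(fg) = (df)g + f(dg), because commutators
# always do. The symbols dt, f, g are declared noncommutative.
import sympy as sp

L = sp.symbols('L')                                 # commutative parameter
dt, f, g = sp.symbols('dt f g', commutative=False)

d = lambda a: (dt * a - a * dt) / L                 # the induced differential

leibniz_defect = sp.expand(d(f * g) - (d(f) * g + f * d(g)))
assert leibniz_defect == 0
print(leibniz_defect)   # 0
```

So whatever the details of the quantum space, the time direction conjured up by dt comes with a genuine calculus attached.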

*Editor’s note:* **Shahn Majid** was talking around his paper “Noncommutative model with spontaneous time generation and Planckian bound,” *Journal of Mathematical Physics* (2005). A general introduction to his ideas on quantum spacetime appears in **On Space and Time**.

First off, congratulations America! Electing the first black US president has to be significant and already puts Obama into the history books, whatever economic and other problems may loom worryingly in the future. Certainly his work will be cut out for him given the falls in the stock market and some of the dire predictions going forward.

Maybe in such times of history, change and future uncertainty, it is appropriate to reflect on what exactly we mean by past, present and future. What is the cutting edge of modern physics telling us about these important concepts?

So far in these blogs I have focussed on hard science verifiable by experiment. But it is also part of the background to my multiauthored volume **On Space and Time** that to proceed further with fundamental science may need revolutionary new ideas that science is still grasping for. So this week we are going to let our hair down and extrapolate from what is understood into what is definitely, well, speculative.

Incidentally, I did run these ideas here past a BBC producer for **Horizon** a few years ago when he called me asking about the possibility of time travel, and obviously I was not controversial.

What I propose, as a motion for debate, is:

**The direction of time is a spontaneously broken symmetry, in the same way as which side of the road to drive on is a spontaneously broken symmetry.**

Let me explain the analogy first. For the sake of argument, let’s say that either driving on the left or driving on the right is equally good. At some point, with enough drivers crowding the road, you have to break the symmetry and decide somewhat arbitrarily (‘spontaneously’) on one side or the other. But once enough of you have bought right-handed cars and started driving them, you are pretty much locked into that choice in your region.

Now for the arrow of time. This is not controversial at a subatomic level and at the level of fundamental equations of physics; there is a symmetry between, say, t and -t in the equations, i.e. between increasing and decreasing time. For example, the relativistic wave equation that governs the simplest particles involves (d/dt)^{2} which does not change under such a change of variables. In physics the actual symmetry is PCT — it means left-right reversal (“parity”), particle-antiparticle interchange (“charge conjugation”) and time-reversal. This is what led the legendary physicist Richard Feynman once to say that in his view a positron (an anti-electron) is just an electron traveling backwards in time.

So, you could view physics with a reversed arrow of time relative to everyone else (looking backwards so what you call time increasing corresponds to everyone else’s time decreasing) and this would be OK with subatomic physics as long as you also flip particles with antiparticles and left with right. The equations would not know, it would be a matter of convention and your conventions would be related to usual ones by these flips.
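A toy demonstration of this reversibility (my illustration, at the level of classical mechanics rather than particle physics): evolve a pendulum forward with a time-reversible integrator, apply the ‘flip’ v → −v, evolve forward again, and the original state reappears. The equations genuinely would not know.

```python
import math

# Evolve a pendulum with the leapfrog (velocity Verlet) scheme, which is
# exactly time-reversible: run forward, flip the velocity, run forward
# again, and you recover the initial state.

def leapfrog(theta, v, dt, steps):
    for _ in range(steps):
        v += -math.sin(theta) * dt / 2   # half kick
        theta += v * dt                  # drift
        v += -math.sin(theta) * dt / 2   # half kick
    return theta, v

theta0, v0 = 0.7, 0.3
theta1, v1 = leapfrog(theta0, v0, dt=0.01, steps=2000)    # forward in time
theta2, v2 = leapfrog(theta1, -v1, dt=0.01, steps=2000)   # flipped run
assert abs(theta2 - theta0) < 1e-6 and abs(v2 + v0) < 1e-6
```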

What is controversial is extending this to macroscopic physics. Let’s try, and you will see why. I am saying that there is a symmetry between, say, being a historian and being an economist (by which I mean broadly predicting the future in similar terms to modern historians, not just the stock market but governments, social trends etc.). Both take the world as it is today and extrapolate — backwards or forwards according to models of how the world works, to the past or future respectively. So if there was some other part of the Universe (another ‘region’ in the driving analogy) where people used the reversed convention on the arrow of time, their historians would be our economists in the sense above. Let’s call this, for the sake of discussion, time-reversed world, or **TR world**. It need not be an actual other world but just a reversed world-view.

This is not a problem with reversible classical mechanical models of evolution of the world. It is also not a problem for (unitary) evolution of the quantum state in quantum theory but there might be problems when you make quantum measurements. In quantum theory when you measure something the quantum state ‘collapses’ to the result of the measurement; information about the range of possibilities and their probabilities prior to measurement is lost. I think this is a red herring. Historians make use of probabilistic models just as well as economists, i.e. saying:

‘Given what we know now, it’s 99% certain that Caesar visited Gaul in the year 56 BCE.’

That sort of thing. Notice that the ‘arrow of time’ in the use of the probability here is past-pointing.

From a physicist’s point of view the main objection is the second law of thermodynamics, that entropy always increases. My view, however, is that this is ultimately **not** actually a fundamental law of nature. Rather, I think of it as a tautology about the way that we define and use probability. Thus, if you view probability as quantifying what **will** happen given what you know now, then you have already built an arrow of time into the very notion of probability and into probabilistic concepts such as entropy. If, as just discussed, you reverse your usage then you will also be using these terms differently. A historian’s state of knowledge gets more and more uncertain as you go further back in time.

The real problem is that the arrow of time is so built into everything we do the moment that we communicate and share information — into the very concepts that we use — that tracing through all of the details of the reversed interpretation, fleshing out the dictionary between our usual way of speaking and the reversed way, is an immense and almost unimaginable task. It would be akin to creating a new and unfamiliar language and way of looking at the world, but would be harder because every bit of science and not just everyday life has to go into the dictionary. It is certainly much harder than converting your driving mentality from left to right as you go from the UK to the US.

The deepest part of the problem here is, well, how we speak about the notion of reality itself. Clearly, the past is somehow real, fixed, while the future is not yet written. Isn’t this where the historians-economists symmetry surely fails? Just because historians don’t know the past for certain does not mean that the past does not absolutely exist. I agree with that statement. But the thing is that the equations of physics are generally a-temporal, i.e. one looks down on the whole spacetime continuum past and future, so from that perspective the future is also ‘real’. Free will and such matters are not really understood in physics, although some would say that they should be one day (one can point at the ‘measurement problem’ in quantum theory as providing a hint). This is a point that John Polkinghorne makes in his section of **On Space and Time**, that we do not yet have but do need a ‘theory of time unfolding’. Until then, the best we can say is that what for us is the actual past would for the people of TR world be the uncertain future, while what for us is uncertain would be fixed for them, even if their historians’ knowledge of it was as murky as our economists’ knowledge of our future. It is ultimately a philosophical point as to what ‘exists’ really means.

To see some of the problems for scientists to define this ‘present’ where the solid past becomes the uncertain future, consider that someone zooming past you at high velocity experiences a different ‘now’ than you do. A standard illustration of this is the ‘pole in the barn’ paradox. Perhaps the reader will know that fast moving objects also shrink (this is called Lorentz contraction). So imagine a runner with a 20 foot pole going so fast that it appears to us as 10ft. Imagine it passes through a barn that is 19ft long and has doors at each end, and when the pole is inside we briefly shut both the doors. So the pole is momentarily enclosed in the barn. But from the point of view of the person running with the pole, the pole is not moving, so it is 20ft long and the barn is zooming towards them and is shrunk to 9.5ft. Clearly, the pole can at no instant fit entirely in the barn! The only way out is that what appears to us on the ground as closing the doors simultaneously appears from the point of view of the person moving with the pole as first one door closing for an instant and then the other door closing for another instant. The notion of ‘now’ is therefore ill defined. This is an instance of Einstein’s Special Relativity in action and one of the surprises is that it does not matter that much to physics; one can still have a notion of cause and effect without a universally agreed ‘now.’
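For the numerically curious, here is the arithmetic behind those numbers (a sketch of my own; the factor-of-2 contraction fixes the runner’s speed):

```python
import math

# A contraction factor of 2 (the 20 ft pole seen as 10 ft) fixes the
# runner's speed through gamma = 1/sqrt(1 - (v/c)^2) = 2.

gamma = 2.0
beta = math.sqrt(1 - 1 / gamma**2)  # v/c: about 0.866, i.e. 86.6% of c
assert abs(beta - math.sqrt(3) / 2) < 1e-12

pole_rest, barn_rest = 20.0, 19.0   # rest lengths in feet
print(pole_rest / gamma)            # pole in the barn frame: 10.0 ft
print(barn_rest / gamma)            # barn in the runner's frame: 9.5 ft
# In the runner's frame a 20 ft pole can never fit a 9.5 ft barn; the
# resolution is that the door closings are not simultaneous in that frame.
```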

So, can we have time travel? Over the years, there have been several fictional works about meeting someone traveling backwards relative to us. *The Time Traveler’s Wife* by Audrey Niffenegger is a recent one, while earlier efforts included stories by John Wyndham and by Brian Aldiss. I have not read any of these myself but I suppose that the traveler or the traveler’s consciousness travels back in jumps but is then aligned with our own arrow of time moving forward before the next jump. This is obviously wrong. What would it be like to truly meet someone from TR world? In view of what we have said above it would be much more serious even than recording what they said and playing it backwards. Our very notions of what it was for them or us to **be** would be different and need to be part of the dictionary.

**But I can show you how it might work at the subatomic level.**

Reading the figure from the bottom with time going ‘up’, we have at A a very high energy photon (a gamma ray) turning into an electron-positron pair (the paths marked e_- and e_+ respectively). These propagate and, perhaps in an electromagnetic field depicted by interacting with more photons, bend round and happen to recombine back into a gamma ray. It could happen, with low probability. In TR world the same series of events would be read backwards from the top of the page and I’ve done the diagram in such a way that they would see the same thing, just with the roles of electron and positron swapped. But a third way, remembering what Feynman said, would be to say that an electron appeared out of nowhere at A, absorbing a photon of light, travelled to the upper part of the diagram and then disappeared in a flash of light at B, to travel back in time to the bottom of the diagram where it appeared at A as the electron we started with.

How to extend such subatomic ideas to macroscopic physics remains a mystery. We got a glimpse of how it might work thinking about history vs economics, but fundamental gaps remain. But what I find fascinating is that some of it could be viable scientific research, i.e. proceeding in an incremental manner from the subatomic end up through the different layers of science, if only to see exactly where the symmetry goes wrong. I have a hunch that we would learn a lot about ourselves in the process.

**Shahn Majid** discusses how the notion of quantum symmetry coming out of modern ideas on space and time could provide clues to the workings of a truly quantum computer.

Have you ever sat through a really boring flow chart presentation and to pass the time found yourself wondering the following: See the way that flow chart arrow crosses that other flow chart arrow:

**Does it matter whether the arrow ‘passes under’ the other arrow or ‘jumps over’ it?**

If you are an engineer you could ponder the same question for a schematic for the wiring of a computer. In fact you could ponder the question when actually building a computer: does it matter if this wire connecting to that chip jumps over or under this other wire? If you thought it did matter, you would have discovered quantum computers as well as quantum symmetry!

*Nice work.*

Let me start with the symmetry. Truth, symmetry, beauty! The cornerstones of mathematics, some would argue of the very concept of knowledge. Surely, nothing could be deeper or more self-evident than the notion of symmetry — of finding patterns. But what if our usual conception of symmetry was not quite right? As scientists we should not be afraid to question even the most basic of assumptions. After all, Nature does not know or care what maths is in maths books, and maybe Nature is just a lot more imaginative than anything we have so far thought of.

Well, I do think that the usual notion of symmetry is not quite right and for reasons tied up with space and time, the topic of my multi-authored book **On Space and Time**. The point is that spacetime itself appears to be **quantum**. As part of that, we can expect that space is also typically quantum. Now, the notion of symmetry grew out of things like reflections or rotations of objects in *space*. Patterns are usually patterns in some space. This is by no means exclusive but at least for basic examples like these, the notion of symmetry therefore needs to be correspondingly quantum. We need a new concept, **quantum symmetry**.

What is a quantum symmetry then? The current thinking is the following. Normally symmetry transformations can be composed, so we should have a ‘product’ into which the data for two symmetry operations are fed, say at the top of the ‘box’ and the composed result comes out at the bottom. We are taking a ‘flow chart’ view of mathematics here. So we have boxes or ‘gates’ that compose or multiply.

What is new and most unexpected is another operation which has one leg coming in at the top and *two* legs coming out at the bottom. This is called a ‘comultiplication’. For ordinary (not quantum) symmetries such a box is taken for granted and you don’t even notice it: when in a flow chart or calculation you use the same data twice, you just duplicate it. Such a duplication or ‘xeroxing box’ has information coming in and two copies of it coming out. A quantum symmetry allows more complicated comultiplications subject to rules of compatibility with the multiplication. It’s a neat idea and it works — it allows one, for example, to describe the quantum version of Einstein’s special relativity as a quantum symmetry of the quantum spacetime from last week’s post. But once the genie is out of the box, the idea of a quantum symmetry is very powerful and turns up all over the place.
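To make the boxes concrete, here is a minimal sketch (my own toy example, not from the book) of the multiplication box and the ‘xeroxing’ comultiplication for the group algebra of Z2, together with a numerical check of the compatibility rule: multiply-then-copy equals copy-then-multiply.

```python
import numpy as np

# Basis (e, g) of the group algebra of Z2.  The multiplication box m has
# two legs in and one out; the comultiplication box D has one leg in and
# two out, and here is just the 'xeroxing' map on group elements.

# m[i,j,k] = coefficient of basis k in (basis i).(basis j)
m = np.zeros((2, 2, 2))
m[0, 0, 0] = m[0, 1, 1] = m[1, 0, 1] = 1  # e.e=e, e.g=g, g.e=g
m[1, 1, 0] = 1                            # g.g=e

# D[i,p,q] = coefficient of (basis p)x(basis q) in Delta(basis i) = i x i
D = np.zeros((2, 2, 2))
D[0, 0, 0] = 1  # Delta(e) = e x e
D[1, 1, 1] = 1  # Delta(g) = g x g

# Compatibility rule, as flow charts: Delta(ab) = Delta(a).Delta(b)
copy_after = np.einsum('ijk,kpq->ijpq', m, D)                 # multiply, then copy
copy_before = np.einsum('ipq,jrs,pru,qsv->ijuv', D, D, m, m)  # copy, then multiply pairwise
assert np.allclose(copy_after, copy_before)
```

A quantum symmetry keeps exactly this compatibility rule but replaces the innocuous xeroxing D by something cleverer.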

How does this all relate to quantum computers? In a usual computer you also have boxes — silicon chips — processing data. In a quantum computer the idea is to replace the digital — 0 and 1 — data by quantum state data. So analog or vector data flows along the wires of the flow chart, if you like. It turns out that the superposition of classical data allowed by this makes such quantum computers fundamentally faster in principle than ordinary ones. Many groups in the world are actively trying to build elements of quantum computers, for example the NEC gate announced in May last year.

But what about the semantics or logic of a quantum computer? Just as a digital computer has as building blocks ‘and’ and ‘or’ gates of classical logic, which are tied to basic ideas of symmetry and truth, we should expect that the building blocks for a quantum computer should be elementary quantum symmetries or at least intimately tied to them. So you could imagine pairs of boxes of a quantum symmetry as the basic gates. If this is so, then a feature of many quantum symmetries is that when combining them it does actually matter whether a wire goes under or over another. I am glossing over some issues — these are technically called braided quantum symmetries but they are closely related to usual quantum symmetries.
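Here is a concrete sketch of such a crossing (the standard braiding matrix from the quantum group behind the Jones polynomial; the numerical check itself is my own illustration): for generic q the over-crossing and the under-crossing genuinely differ, while at q = 1 they collapse to the ordinary flip.

```python
import numpy as np

def braiding(q):
    """Hecke-type braiding on two strands, basis (11, 12, 21, 22)."""
    return np.array([[q, 0, 0,       0],
                     [0, 0, 1,       0],
                     [0, 1, q - 1/q, 0],
                     [0, 0, 0,       q]], dtype=float)

q = 2.0          # any q != +-1 gives a genuinely braided symmetry
R = braiding(q)  # the 'over-crossing' box; its inverse is the under-crossing
I2 = np.eye(2)

# Consistency of crossings on three strands (the braid/Yang-Baxter relation):
lhs = np.kron(R, I2) @ np.kron(I2, R) @ np.kron(R, I2)
rhs = np.kron(I2, R) @ np.kron(R, I2) @ np.kron(I2, R)
assert np.allclose(lhs, rhs)

# Over and under crossings differ here ...
assert not np.allclose(R, np.linalg.inv(R))
# ... but at q = 1 the crossing is the ordinary flip, its own inverse:
assert np.allclose(braiding(1.0) @ braiding(1.0), np.eye(4))
```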

It might take a while to make functioning gates, but it seems to me that the next problem is then concatenating gates. What is the analogue of a ‘copper wire’? How do you ‘transport’ quantum information? In the NEC gate, for example, the gate is charged by an external microwave pulse. All very well, but you need to be able to feed the output of one gate into the input of the other without external equipment. This is a much tougher problem and I suspect the answer is that, well, you just don’t have any wiring!

Years ago, there was an ancient SciFi series ‘Space: 1999’ where the computers were amusingly made of translucent blocks that you could stack up one on the other. So information is conveyed by juxtaposition. I would argue that this is a much better model for a quantum computer. Going back to our flow chart at the start of the post, it means that the arrow crossings themselves have to be boxes with two legs coming in and two legs coming out. Actually we need two such boxes, for whether the left hand information flows under or over the right hand information. We have a fair idea from the theory of quantum symmetry how these ‘cross-over’ gates should behave. Then, once built, we could compress up all our computational flow charts into touching boxes. No more wires! I guess I am predicting that this is how quantum computers will actually be built one day, perhaps next century.

Back in the here and now, in fact some 20 years ago, there was a revolution in knot theory in which Vaughan Jones figured out how to associate a function to a picture of a knot as a way to tell whether it is knotted, work for which he received a Fields Medal.

Jones’ method can be viewed as an example of a quantum computer closely related to the first true examples of quantum symmetry. One reads the knot from top to bottom with a cross-over box each time one part of the knot crosses over the other. The kinds of quantum computers that I have been talking about are similarly ‘topological’ and can be represented by knots with additional boxes or nodes on them and additional strings coming into and flowing out of such nodes, as we discussed.

There is one more thing I would like to leave you with. Notice that quantum symmetry restores a kind of ‘input-output symmetry’ since as well as being able to combine or ‘multiply’ information you can also uncombine or ‘comultiply’ it. All the rules are symmetric between the two. It means that you can take a computation with quantum symmetries as flow charts, turn the chart upside down, and you have another valid computation with the roles of multiplication and comultiplication flipped. If such quantum symmetry ‘duality’ ideas are fully manifested in quantum gravity they would, I argue in my chapter of the book, be related to a deep duality between quantum spacetime and gravity or between the macro world and the micro world. This also suggests some deep ideas about reversing time, which the margins of this blog are too small to contain (to paraphrase Fermat)…

**Shahn Majid** explains why this may be.

In these posts I have emphasized ideas on the cutting edge of fundamental science which have testable predictions or other contact with experiment, rather than being merely fashionable. Now, up until recently it was widely assumed that ideas for the ‘Mount Everest’ challenge of quantum gravity, as **Martin Rees** puts it in his review of the multiauthored book **On Space and Time**, could never be tested experimentally.

Accordingly, theoretical physicists in the last two decades have often given up on serious experimental contact and based their ideas on fashion or ‘elegance’. This, unfortunately, is not by itself a reliable indicator as it rather depends on what maths you are familiar with, something which tends to be rather hit and miss in the theoretical physics community. I consequently agree with Martin Rees that we are nowhere near the ‘summit’ as it were.

For example, I remember at the turn of the millennium waking up to a respectable BBC radio chat show, I believe it was *In Our Time*, in which a string theorist explained that string theory tries to unify quantum theory and gravity. When asked what was the evidence for string theory, the individual replied “well, there is evidence for quantum theory and there is evidence for gravity, so there is evidence for string theory.”

This was pretty shocking for me and for most of my colleagues (including my string theory colleagues) because a theory has to be judged by how it goes beyond what is known, not by the mere wish to succeed. It’s no doubt tough being on the radio and probably the interviewee was trying too hard to oversimplify, but it illustrates the problem. I should say that I am not against string theory per se, though I do agree with those who say that it should be judged in perspective and not to the exclusion of other approaches.

How can we return to experiment, as we surely must to make genuine progress in quantum gravity? In my own chapter of **On Space and Time**, I explain that one can make certain quantum gravity predictions **without** knowing quantum gravity and without pretending to have a theory of everything at all. The idea is shown in the diagram:

Thus, quantum gravity, whatever it is, must in some limiting approximation recover ordinary gravity in the form of curved spacetime. Now, as we step away from this limit we must have corrections to geometry that begin, for the first time, to include quantum effects. The problem with this is that you need first of all to have new mathematics — a new more general notion of geometry itself — within which to cast such possible effects. You can’t describe effects involving colour if you can only see in black and white. So what we need in the first instance is a new more imaginative conception of geometry itself.

We have seen last week how a new more general notion of ‘noncommutative’ or ‘quantum’ geometry based on symbolic algebra could revolutionise our understanding of subatomic physics. This week I want to explain how geometry as symbolic algebra could modify physics in a manner detectable at astronomical scales. The algebra in question has symbols *x, y, z,* and *t* obeying relations of the form

x t – t x = i L x

(and similarly with y and z in place of x), where L is potentially a constant of Nature. If the effect is due to quantum gravity we might expect L=10^{-44} seconds for reasons explained a few posts ago. Here *i* is the square root of -1 as last week, and the spatial symbols x, y, z commute among themselves.

The idea here is that the unknown variables *x*, *y*, *z*, and *t* that describe the location and time of an event are no longer numbers, since numbers could never obey equations such as the above. They are abstract symbols which could, however, be realised concretely as matrices much as in quantum mechanics. Here are some predictions:

**1. A particle or an event cannot be exactly located in both space and time.**

So if you look at your watch and then immediately look where you are, you will typically get a different answer than if you had first looked where you are and then at your watch. The error is not great, maybe about 0.00000000000000000000000000000000000000000001 seconds. This is different from Heisenberg’s famous uncertainty principle but has a similar flavour.
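A minimal matrix sketch of this noncommutativity (my own illustration, assuming a relation of the bicrossproduct type x t − t x = iLx; signs and factors vary by convention, and I use an illustrative value of L rather than the physical 10^{-44}):

```python
import numpy as np

# Realise x and t as 2x2 matrices obeying  x.t - t.x = iL x  (an assumed
# bicrossproduct-type relation; self-adjointness issues are ignored here).

L = 0.1  # illustrative; physically L would be of order 1e-44 seconds

x = np.array([[0, 1], [0, 0]], dtype=complex)
t = 1j * L * np.array([[0, 0], [0, 1]], dtype=complex)

assert np.allclose(x @ t - t @ x, 1j * L * x)  # the quantum spacetime relation
assert not np.allclose(x @ t, t @ x)           # order of measurement matters:
# 'watch then position' differs from 'position then watch'.
```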

**2. The speed of light depends a bit on the energy.**

Blue light travels just a very very tiny bit slower than red light. This is in sharp contrast to the cornerstone of Einstein’s theory that the speed of light is a fundamental constant.

There has been speculation that the first effect might be detected as increased ‘noise’ in certain highly sensitive experiments that could be done today. But let’s look at the second effect. Some time next year NASA will launch LISA, an interferometer designed to detect gravitational waves in space. This instrument is so sensitive that it could in principle be retooled to also test for the variable speed of light effect. I’m told that this would only cost a few million euro, compared to the 5 billion spent on the LHC. Another experiment that IS being done uses the NASA FERMI (GLAST) satellite which went up earlier this year.

The original acronym stands for Gamma-ray Large Area Space Telescope; it detects gamma rays — a very energetic form of light — that are created in massive bursts often on the other side of the Universe. These create a spread of energies and according to the theory the more energetic ones would arrive a little more slowly. When you put the numbers in, the difference is about 1/1000th of a second which is quite reasonable. Life is not quite as easy as that, however, as we have no control over these bursts on the other side of the Universe, so this effect if present would show up only after statistical analysis of a great many such measurements. Collection of such ‘time of flight’ data is part of the mission protocol and could either confirm or disprove the theory.
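As a rough sketch of where the 1/1000th of a second comes from (the scaling here is my back-of-envelope assumption: a fractional speed change of order L·E/ħ, accumulated over a 10-billion-year flight):

```python
# Order-of-magnitude arrival delay for an energetic gamma-ray photon,
# assuming the fractional change in speed is ~ L*E/hbar (my assumption
# for the purposes of this estimate).

hbar = 1.05e-34            # J.s
L = 1e-44                  # s, the Planck-scale parameter from earlier posts
E = 1.6e-10                # J, roughly a 1 GeV gamma-ray photon
year = 3.15e7              # s
flight_time = 10e9 * year  # ~10 billion years of light travel

delay = flight_time * (L * E / hbar)
print(delay)  # of order milliseconds, consistent with the estimate above
```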

If quantum spacetime is confirmed in this way, it would be a huge discovery in a certain sense dual (I may try to explain this next week) to the discovery of gravity itself. Clearly, these are exciting times for fundamental physics!


Some of Fields medalist **Alain Connes**‘ revolutionary ideas shed light on how to understand the ‘zoo’ of elementary particles thrown up by accelerators like the **LHC**. If Connes is right, the key to the fundamental nature of matter lies in **graffiti carved on a bridge in Dublin in 1843**.

The graffiti was carved by **William Rowan Hamilton** on Brougham bridge as the ebullient mathematician was passing on a walk with his wife. According to a plaque there, it read:

i^{2} = j^{2} = k^{2} = ijk = -1

I am in Dublin later this week and will be taking my camera.

So how does this answer the mysteries of the Universe? According to Alain Connes in his chapter of the multiauthored volume **On Space and Time**, spacetime indeed has ‘extra dimensions’ but these extra dimensions are **not** those of any usual kind of geometry (curled up or whatever as in string theory) but something far more imaginative; they are given mainly by a symbolic algebra defined by this graffiti.

Connes is a Fields medalist, which is like a Nobel Prize for mathematicians (who were left out of the will of Alfred Nobel. Incidentally, there is no merit to the popular myth that this was because of an affair between his wife and a mathematician; he never married). So what that means is that the actual mathematics behind his theory is very deep and very advanced; I’ll only be touching on the easier parts in this post.

Well, let’s get the maths over with. The main idea we need is that **geometry is algebra**.

For example, one can think of a circle as defined concretely by an equation x^{2}+y^{2}=1 in terms of unknown variables x,y. The solutions to this equation among real numbers form a circle. Similarly, working with an algebra of symbols x,y,z with the same properties for the symbols as ordinary numbers would have and with an additional equation x^{2}+y^{2}+z^{2}=1 is equivalent to working with a sphere. Now, the first thing to note about the graffiti is that the symbol i, whatever it is, is the square root of -1. Of course, -1 does not have a square root among ordinary (real) numbers since the product of any such number with itself is positive. But in high school we learn that we can just add in such an ‘imaginary number’ with symbol i to obtain the ‘complex numbers’. They are used throughout the world by engineers and scientists. What Hamilton did was to throw in two more such symbols j and k. But the last relation ijk=-1 connects them. This looks innocent but after a little high school algebra you can deduce that ij=-ji.

**Oops.** This is impossible for ordinary numbers: 3 x 5 = 5 x 3 and similarly for any two ordinary numbers (one says that ordinary numbers ‘commute’). So there could never be any usual kind of geometry described by variables i,j,k and the relations of the graffiti. This is why you need to be much more imaginative and imagine such a geometry even though it does not exist in any usual sense. You cannot visualise it as you can a sphere but you can work with the algebra — the quaternion algebra — as if a geometry did exist. It’s an example of a more general notion of ‘*noncommutative geometry*‘ as Connes puts it (or ‘quantum geometry’ is another term that I like).
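You can check all of the graffiti’s relations concretely by realising i, j, k as 2×2 complex matrices (a standard representation built from the Pauli matrices; the check itself is just a sketch):

```python
import numpy as np

# Quaternion units as 2x2 complex matrices: qi = -i*sigma_x, qj = -i*sigma_y,
# qk = -i*sigma_z, a standard representation of Hamilton's algebra.
I2 = np.eye(2, dtype=complex)
qi = np.array([[0, -1j], [-1j, 0]])
qj = np.array([[0, -1], [1, 0]], dtype=complex)
qk = np.array([[-1j, 0], [0, 1j]])

for u in (qi, qj, qk):
    assert np.allclose(u @ u, -I2)       # i^2 = j^2 = k^2 = -1
assert np.allclose(qi @ qj @ qk, -I2)    # ijk = -1
assert np.allclose(qi @ qj, -(qj @ qi))  # hence ij = -ji: noncommutative!
```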

So, let’s take a copy of this particular noncommutative geometry, i.e. a copy of Hamilton’s quaternions. Let’s extend the ordinary spacetime continuum so that at each point in spacetime you have sitting there these imaginary ‘extra dimensions’. They aren’t curled up, they can’t be visualised in the usual way at all, but they are there. So what? Well, Connes now looks at the equations for wave-particles moving in such an extended spacetime and with the symbols i,j,k (and a bit more of the extension that I have glossed over) realised concretely as matrices much as in quantum mechanics. He and his collaborators find that each such wave-particle would appear to us as a collection of ordinary wave-particles **exactly matching** a good part of the zoo of particles found in particle accelerators. Is this a coincidence?

This brings me to what is ultimately a philosophical point. What does it mean to **understand** something in fundamental science? The point is that when you collide protons in a particle accelerator you get, in the collision fragments, a mess of all different kinds of particles — electrons, muons, tau particles, three flavours of neutrinos, various mesons and hadrons built from up, down, charm, strange, top and bottom quarks, and particles conveying three fundamental forces (not counting gravity). But you also get a **theoretical mess**. The well-established Standard Model of particle physics boils this down to some extent by identifying certain symmetries among the particles or if you like grouping them together. But as Connes likes to point out, it still takes a page of complex and disconnected formulae and more than two dozen unexplained parameters to write down this standard model. The reason it’s called the ‘standard model’ is because it’s survived more or less unchanged since the 1960s. So the philosophical question is: **is nature messy** or should particle physicists **try harder**? Should we be content as zoologists naming and classifying or should we look for a deeper and simpler layer of science as explanation?

It is certainly true that physicists made some attempts to further ‘unify’ their understanding of particle physics (so-called grand-unified theories) but many of them gave up in the 1980s and 1990s to spend time on more fancy things. It means that we are no closer to understanding basic things like why electrons and protons etc weigh what they do or why elementary particles fall into three families as they do, among many other mysteries. But put in the right pure maths and it all **starts** to make sense. At the moment Connes’ theory is not accepted in mainstream physics, but this could change if his current prediction of 168 GeV for the mass of the Higgs particle is vindicated at the LHC.

More important for me is that this is a new ** geometrical** point of view on particle physics that is much more economical and constrained than conventional extra dimensions — if we fully understand that geometry (which Connes encodes as a ‘Dirac operator’ but which you could think of as such things as curvature or gravity partly in these ‘noncommutative’ extra dimensions) then we have a route to understanding all the parameters and structure of the zoo completely.

Past lives and life after death are paltry matters compared to Roger Penrose’s latest ideas about the origin and fate of the Universe itself. In his chapter of the multi-authored volume **On Space and Time**, Penrose argues that certain types of information could be carried over from a previous Universe through the ‘big bang’ into the present Universe, and likewise information could proceed to the infinite cold dark future of our Universe to be carried over into the next.

This is bold stuff and in this week’s post I’d like to give some idea of what is involved. Roger would be able to do it far better himself of course.

The first ingredient is one that many popular science readers will be familiar with — the idea of time dilation. You may know that if you travel in a train at high velocity then you actually experience time more slowly relative to a person on the ground. This is not too noticeable on the Eurostar train, as you have to be close to the speed of light for the effect to be significant (I make it about 0.2 nanoseconds less time experienced on the trip London to Paris).
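Here is that back-of-envelope calculation (my sketch, with rounded figures for the train’s speed and journey time):

```python
import math

# Time experienced on board is shorter by the factor 1/gamma; at everyday
# speeds the deficit over a journey of duration T is about T*v^2/(2c^2).

c = 3.0e8            # m/s
v = 300 / 3.6        # 300 km/h in m/s
T = 2.25 * 3600      # roughly 2h15m London to Paris, in seconds

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
deficit = T * (1 - 1 / gamma)
print(deficit)  # a few tenths of a nanosecond
```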

But, as Penrose observes, for a photon of light itself, time is so stretched that it experiences no time at all! So it is that a photon could traverse the 10 billion year history of the Universe and quite easily carry information to its infinite future and, according to Penrose, beyond it.

Next, many readers will also know that these days gravity is expressed as the curvature of spacetime. The usual way to visualise this is to think of an ant moving about on a two-dimensional surface, which could be bent this way or that. Some of what you see is an artefact of the visualisation but some of it is intrinsic to the surface and determines such things as the shortest path between two points and the length of that path. It’s an analogy for our 4-dimensional spacetime where we don’t have the luxury of being able to ‘step outside it’ as we do when we look down on the ant.

Now, suppose spacetime were to be stretched or distorted in such a way that relative angles and shapes were preserved, even if distances were not. The thing that Penrose observes is that light and other massless particles are necessarily insensitive to such ‘conformal rescalings’. If the world were made up only of such things, there would be no way to detect such a rescaling!
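In symbols, a conformal rescaling replaces the spacetime metric by a rescaled one; this is the standard textbook definition, sketched here rather than taken from Penrose’s chapter:

```latex
% A conformal rescaling by a positive function \Omega(x):
\hat{g}_{\mu\nu}(x) = \Omega^2(x)\, g_{\mu\nu}(x), \qquad \Omega(x) > 0.
% A light ray follows a null direction v, i.e. g_{\mu\nu} v^\mu v^\nu = 0.
% But then also
\hat{g}_{\mu\nu} v^\mu v^\nu = \Omega^2\, g_{\mu\nu} v^\mu v^\nu = 0,
% so null directions -- the light cones that massless particles travel
% along -- are exactly the same in both metrics.
```

Distances and durations change under the rescaling, but the light cones do not, which is the precise sense in which massless particles cannot detect it.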

**So what?**

Well, conformal rescalings can be used to ‘scale back’ the infinite future of the Universe so that it appears as a finite boundary. As Penrose likes to point out, this was anticipated in some of the works of M.C. Escher, notably in his ‘Circle Limit’ series. See how the shapes are retained as an entire infinite (hyperbolic) plane of them is squashed into a finite circle. Likewise, one can use conformal rescaling to ‘blow up’ the point in time which is the putative ‘big bang’ creation of the Universe into a finite boundary. So far these are just mathematical tools. But Penrose can now put forward his hypothesis that the boundary in the far future can be identified with the boundary at the big bang of a subsequent Universe and that the boundary at the big bang of our Universe can be identified with the far future of a previous one!

‘Identified’ here means up to conformal rescalings, i.e. it means in so far as observable by massless wave-particles such as photons and gravitational radiation. Even more, the information carried by such fields could propagate right through the boundary from one Universe to the next, making this theory *in principle* testable. I am oversimplifying quite a bit here — the identification at the boundary takes the form of a certain ‘Weyl curvature hypothesis’ which Penrose proposes (and which in its current form builds on the work of his Oxford colleague Paul Tod). Anyhow, such things as colliding black holes the size of galaxies and galactic clusters, in the far future of the **previous** Universe to ours, would produce huge amounts of gravitational waves which would connect through into our Universe as a particular pattern of inhomogeneities that we might be able to detect.

There are a lot of immediate questions raised by such a bold proposal and Penrose addresses them carefully and in depth. Probably the most fundamental is how could entropy always increase if Universes are repeating in this way? Penrose argues that one must take into account the entropy of gravity or of spacetime itself, so to speak, as expressed in part of its curvature. Although Penrose’s treatment does not yet include quantum gravity effects at the big bang and although the experimental predictions have yet to be developed in detail, this is undoubtedly a bold and fascinating proposal.

Last week I explained what I argue to be the greatest **theoretical** challenge facing fundamental physics today: that the very concept of the spacetime continuum is flawed and in need of revision. This week I want to explain what I think is the very greatest challenge coming from the **experimental** and observational side. Science thrives on a dialogue between theory and experiment and when you put all this together you arrive, as I see it, at the most exciting time for theoretical physics for a century, perhaps even since the 17th century in terms of the expected level of shake-up.

The experiments and observations that I refer to do not relate to the Large Hadron Collider. While that should be interesting, especially if they **don’t** find the Higgs particle … well, the LHC is now broken for a few months and that gives us a chance to see what else is going on. What is going on is the possibility of testing physics at the Planck scale, i.e. at energies 10 million billion times greater than the LHC could ever produce. It’s a brand new field, hitherto considered by physicists completely impossible, called ‘quantum gravity phenomenology’.

Don’t worry, we won’t actually be producing energies that high on Earth in the near future; we will be turning to cosmology. But the energies available if we knew quantum gravity *could* be rather high. If you watch the SciFi Channel series Stargate Atlantis, the portal device is powered by a ‘zero point module’ that taps into the vacuum energy of completely empty space. I think it was Arthur C. Clarke who first brought this into fiction, but it was based on theoretical ideas at the time. One can give a simplistic estimate of this vacuum energy based on cutting off particle wavelengths at the ‘minimum wavelength’ of 10^{-33} cm and at the size of the Universe. I do this in **On Space and Time** and it comes out naively as about 10^{94} grams of mass-energy per cubic centimetre of empty space. To put this in perspective, this is about 10^{88} (i.e. 1 followed by 88 zeros) times the energy consumption of the world in a year, in each cubic centimetre! You can think roughly of a kind of ‘sea’ with the same density as Planck-scale quantum-black-hole objects (as featured in my first post two weeks ago) perhaps making up the foam-like structure of spacetime which, at a distance, we see roughly as a continuum. But please don’t take this too literally. This is more like a ‘sea’ of quantum fluctuations and may well be a theoretical artefact of the way we think about quantum mechanics.
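That naive estimate is essentially the Planck density: one Planck mass per Planck volume. Here is a sketch of the back-of-envelope arithmetic using standard constants (my own reconstruction, not the book’s derivation):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 299_792_458.0     # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)   # Planck length ~ 1.6e-35 m, i.e. ~1.6e-33 cm
m_p = math.sqrt(hbar * c / G)      # Planck mass ~ 2.2e-8 kg

rho = m_p / l_p**3                 # one Planck mass per Planck volume, kg/m^3
rho_gcc = rho * 1e-3               # convert kg/m^3 to g/cm^3

print(f"naive vacuum density ~ 10^{math.log10(rho_gcc):.0f} g/cm^3")  # ~10^94
```

Rounding the exponent gives the 10^{94} grams per cubic centimetre quoted above.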

Now the funny thing is that astronomers in the last two decades have firmed up their picture of the Universe on a large scale and have concluded the apparent existence of some kind of ‘vacuum energy’ uniformly filling space. This has never been seen directly but is deduced from a careful look at the curvature and expansion rate of the Universe and the average energy density needed to explain it. As it has never been seen, it is called ‘dark energy’, but it seems to be there. So on the one hand theory is naively predicting a vacuum energy density throughout space of 10^{94} grams per cubic centimetre while astronomical observations are giving us an actual density of … er … 10^{-29} grams per cubic centimetre! So, the theory is wrong compared to experiment by a factor of 10^{123}, i.e. 1 with 123 zeros after it! There is not even a name for such a big number. The fact that back-of-envelope theoretical estimates are so badly off from what is observed tells us that there is a lot going on that we do not even remotely understand. This is called the ‘problem of the cosmological constant’ or the ‘dark energy problem’ and is probably the greatest challenge for physics today. We just don’t have a clue how to get the experimentally observed answer from any theory other than by ‘fine tuning’ or fudging the answer in an unexplained manner. Maybe, as some philosophers argue, there IS no explanation. But I rather think that it’s a signal of a pending revolution in physics.
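The size of the mismatch is just exponent arithmetic on the two round figures quoted above:

```python
import math

rho_theory   = 1e94    # g/cm^3, naive Planck-scale vacuum-energy estimate
rho_observed = 1e-29   # g/cm^3, dark-energy density deduced by astronomers

factor = rho_theory / rho_observed
print(f"theory overshoots observation by 10^{math.log10(factor):.0f}")  # 10^123
```

That is, 94 − (−29) = 123 orders of magnitude, the famous discrepancy.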

While on dark subjects, astronomers have also found that while their mysterious dark energy makes up about 70% of the mass density of the universe, a further 25% or so is made up of equally unseen matter of a particle nature, called ‘dark matter’. The image (**see above**; it is also the image used on the cover of the book) is a galactic cluster showing a fog of dark matter as deduced from astronomical observations and projected onto the image by a computer (you won’t actually see this dark matter if you look). You can read more about it here.

Things are being discovered about dark matter all the time, but no one has a clue what it’s actually made of. Not even a dust of stable black-hole remnants seems to fit the bill: the density of such things from all currently known mechanisms, at any rate, is not high enough to account for 25% of the matter in the Universe.

So, does all this worry you or does it excite you? Only 4-5% of the mass-energy of the universe, as deduced by astronomers from its gravitational effects, is explained by modern science! Some 70% is in some unexplained form of energy and some 25% in some unexplained form of matter. It should excite you, as it means that the world is actually as **full of mystery** and ripe for scientific revolution today as it was for Newton discovering gravity as an explanation of Kepler’s laws of planetary motion, or as it was at the birth of quantum mechanics at the start of the last century. Meanwhile, if anyone tells you they are close to a ‘theory of everything’ you should be skeptical if they are not also close to addressing these mysteries.