Sunday, October 23, 2016

The concordance model strikes back

Two weeks ago, I summarized a recent paper by McGaugh et al., who reported a correlation in galactic structures. The researchers studied a dataset with the rotation curves of 153 galaxies and showed that the gravitational acceleration inferred from the rotational velocity (including dark matter), g_obs, is strongly correlated with the gravitational acceleration from the normal matter (stars and gas), g_bar.

Figure from arXiv:1609.05917 [astro-ph.GA] 

This isn’t actually new data or a new correlation, but a new way to look at correlations in previously available data.

The authors of the paper were very careful not to jump to conclusions from their results, but merely stated that this correlation requires some explanation. That galactic rotation curves have surprising regularities, however, has been evidence in favor of modified gravity for two decades, so the implication was clear: Here is something that the concordance model might have trouble explaining.

As I remarked in my previous blogpost, while the correlation does seem to be strong, it would be good to see the results of a simulation with the concordance model that describes dark matter, as usual, as a pressureless, cold fluid. In this case too one would expect there to be some relation. Normal matter forms galaxies in the gravitational potentials previously created by dark matter, so the two components should have some correlation with each other. The question is how much.

Just the other day, a new paper appeared on the arxiv, which looked at exactly this. The authors of the new paper analyzed the result of a specific numerical simulation within the concordance model. And they find that the correlation in this simulated sample is actually stronger than the observed one!

Figure from arXiv:1610.06183 [astro-ph.GA]

Moreover, they also demonstrate that in the concordance model, the slope of the best-fit curve should depend on the galaxies’ redshift (z), ie the age of the galaxy. This would be a way to test which explanation is correct.

Figure from arXiv:1610.06183 [astro-ph.GA]

I am not familiar with the specific numerical code that the authors use and hence I am not sure what to make of this. It’s been known for a long time that the concordance model has difficulties getting structures on galactic size right, especially galactic cores, and so it isn’t clear to me just how many parameters this model uses to work right. If the parameters were previously chosen so as to match observations already, then this result is hardly surprising.

McGaugh, one of the authors of the first paper, has already offered some comments (ht Yves). He notes that the sample size of the galaxies in the simulation is small, which might at least partly account for the small scatter. He is also skeptical of the results: “It is true that a single model does something like this as a result of dissipative collapse. It is not true that an ensemble of such models are guaranteed to fall on the same relation.”

I am somewhat puzzled by this result because, as I mentioned above, the correlation in the McGaugh paper is based on previously known correlations, such as the brightness-velocity relation which, to my knowledge, hadn’t been explained by the concordance model. So I would find it surprising should the results of the new paper hold up. I’m sure we’ll hear more about this in the near future.

Wednesday, October 19, 2016

Dear Dr B: Where does dark energy come from and what’s it made of?

“As the universe expands and dark energy remains constant (negative pressure) then where does the ever increasing amount of dark energy come from? Is this genuinely creating something from nothing (bit of lay man’s hype here), do conservation laws not apply? Puzzled over this for ages now.”
-- pete best
“When speaking of the Einstein equation, is it the case that the contribution of dark matter is always included in the stress energy tensor (source term) and that dark energy is included in the cosmological constant term? If so, is this the main reason to distinguish between these two forms of ‘darkness’? I ask because I don’t normally read about dark energy being ‘composed of particles’ in the way dark matter is discussed phenomenologically.”
-- CGT

Dear Pete, CGT:

Dark energy is often portrayed as very mysterious. But when you look at the math, it’s really the simplest aspect of general relativity.

Before I start, allow me to clarify that your questions refer to “dark energy” but are specifically about the cosmological constant, which is a certain type of dark energy. For all we presently know, a cosmological constant fits all existing observations. Dark energy could be more complicated than that, but let’s start with the cosmological constant.

Einstein’s field equations can be derived from very few assumptions. First, there’s the equivalence principle, which can be formulated mathematically as the requirement that the equations be tensor-equations. Second, the equations should describe the curvature of space-time. Third, the source of gravity is the stress-energy tensor and it’s locally conserved.

If you write down the simplest equations which fulfill these criteria you get Einstein’s field equations with two free constants. One constant can be fixed by deriving the Newtonian limit and it turns out to be Newton’s constant, G. The other constant is the cosmological constant, usually denoted Λ. You can make the equations more complicated by adding higher order terms, but at low energies these two constants are the only relevant ones.
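Written out, with R_{\mu\nu} the Ricci tensor, R the curvature scalar, g_{\mu\nu} the metric and T_{\mu\nu} the stress-energy tensor, and in units with c = 1, the equations read

\[ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}\,. \]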
Einstein's field equations. [Image Source]
If the cosmological constant is not zero, then flat space-time is no longer a solution of the equations. If the constant is positive-valued in particular, space will undergo accelerated expansion if there are no other matter sources, or these are negligible in comparison to Λ. Our universe presently seems to be in a phase that is dominated by a positive cosmological constant – that’s the easiest way to explain the observations which were awarded the 2011 Nobel Prize in physics.
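To see where the accelerated expansion comes from: for a spatially flat universe with scale factor a(t), a positive cosmological constant, and no other sources, the field equations reduce to the Friedmann equation

\[ \left(\frac{\dot a}{a}\right)^2 = \frac{\Lambda}{3}\,, \qquad\text{hence}\qquad a(t) \propto e^{\sqrt{\Lambda/3}\, t}\,, \]

an exponential – and therefore accelerated – expansion.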

Things get difficult if one tries to find an interpretation of the rather unambiguous mathematics. You can for example take the term with the cosmological constant and not think of it as geometrical, but instead move it to the other side of the equation and think of it as some stuff that causes curvature. If you do that, you might be tempted to read the entries of the cosmological constant term as if it was a kind of fluid. It would then correspond to a fluid with constant density and with constant, negative pressure. That’s something one can write down. But does this interpretation make any sense? I don’t know. There isn’t any known fluid with such behavior.
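In formulas, moving the cosmological constant term to the right-hand side of the field equations defines an effective stress-energy tensor

\[ T^{(\Lambda)}_{\mu\nu} = -\frac{\Lambda}{8\pi G}\, g_{\mu\nu}\,, \]

which, compared with that of a perfect fluid, corresponds to a constant energy density \rho_\Lambda = \Lambda/(8\pi G) together with a constant pressure p_\Lambda = -\rho_\Lambda.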

Since the cosmological constant is also present if matter sources are absent, it can be interpreted as the energy-density and pressure of the vacuum. Indeed, one can calculate such a term in quantum field theory, just that the result is infamously 120 orders of magnitude too large. But that’s a different story and shall be told another time. The cosmological constant term is therefore often referred to as the “vacuum energy,” but that’s sloppy. It’s an energy-density, not an energy, and that’s an important difference.

How can it possibly be that an energy density remains constant as the universe expands, you ask. Doesn’t this mean you need to create more energy from somewhere? No, you don’t need to create anything. This is a confusion which comes about because you interpret the density which has been assigned to the cosmological constant like a density of matter, but that’s not what it is. If it was some kind of stuff we know, then, yes, you would expect the density to dilute as space expands. But the cosmological constant is a property of space-time itself. As space expands, there’s more space, and that space still has the same vacuum energy density – it’s constant!

The cosmological constant term is indeed conserved in general relativity, and it’s conserved separately from the other energy and matter sources. It’s just that conservation of stress-energy in general relativity works differently than you might be used to from flat space.

According to Noether’s theorem there’s a conserved quantity for every (continuous) symmetry. A flat space-time is the same at every place and at every moment of time. We say it has a translational invariance in space and time. These are symmetries, and they come with conserved quantities: Translational invariance of space conserves momentum, translational invariance in time conserves energy.

In a curved space-time generically neither symmetry is fulfilled, hence neither energy nor momentum are conserved. So, if you take the vacuum energy density and you integrate it over some volume to get an energy, then the total energy grows with the volume indeed. It’s just not conserved. How strange! But that makes perfect sense: It’s not conserved because space expands and hence we have no invariance in time. Consequently, there’s no conserved quantity for invariance in time.

But General Relativity has a more complicated type of symmetry to which Noether’s theorem can be applied. This gives rise to a local conservation law for stress-energy when coupled to gravity (the stress-energy tensor is covariantly conserved).

The conservation law for the density of a pressureless fluid, for example, works as you expect it to work: As space expands, the density goes down with the volume. For radiation – which has pressure – the energy density falls faster than that of matter because wavelengths also redshift. And if you put the cosmological constant term with its negative pressure into the conservation law, both the energy density and the pressure remain the same. It’s all consistent: They are conserved because they are constant.
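In equations, for a homogeneous and isotropic universe with scale factor a(t) and Hubble rate H = \dot a/a, the covariant conservation law boils down to the continuity equation

\[ \dot\rho + 3H\left(\rho + p\right) = 0\,, \qquad\text{so}\qquad p=0:\ \rho \propto a^{-3}\,, \quad p=\tfrac{\rho}{3}:\ \rho \propto a^{-4}\,, \quad p=-\rho:\ \dot\rho = 0\,. \]

The constant density of the cosmological constant term is hence exactly what the conservation law demands.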

Dark energy now is a generalization of the cosmological constant, in which one invents some fields which give rise to a similar term. There are various fields that theoretical physicists have played with: chameleon fields and phantom fields and quintessence and such. The difference from the cosmological constant is that these fields’ densities do change with time, albeit slowly. There is however presently no evidence that this is the case.

As to the question of which dark stuff to include in which term: Dark matter is usually assumed to be pressureless, which means that as far as its gravitational pull is concerned it behaves just like normal matter. Dark energy, in contrast, has negative pressure and does odd things. That’s why they are usually collected in different terms.

Why don’t you normally read about dark energy being made of particles? Because you need some really strange stuff to get something that behaves like dark energy. You can’t make it out of any kind of particle that we know – this would either give you a matter term or a radiation term, neither of which does what dark energy needs to do.

If dark energy was some kind of field, or some kind of condensate, then it would be made of something else. In that case its density might indeed also vary from one place to the next and we might be able to detect the presence of that field in some way. Again though, there isn’t presently any evidence for that.

Thanks for your interesting questions!

Wednesday, October 12, 2016

What if dark matter is not a particle? The second wind of modified gravity.

Another year has passed and Vera Rubin was not awarded the Nobel Prize. She’s 88 and the prize can’t be awarded posthumously, so I can’t shake the impression the Royal Academy is waiting for her to die while they work off a backlog of condensed-matter breakthroughs.

Sure, nobody knows whether galaxies actually contain the weakly interacting and non-luminous particles we have come to call dark matter. And Fritz Zwicky was the first to notice that the galaxies in a cluster move faster than the visible mass alone could account for – and the one to coin the term dark matter. But it was Rubin who pinned down the evidence that galaxies systematically misbehave, by showing that the rotational velocities of spiral galaxies don’t fall off with distance from the galactic center but instead level out – as if there was unseen extra mass in the galaxies. And Zwicky is dead anyway, so the Nobel committee doesn’t have to worry about him.

After Rubin’s discovery, many other observations confirmed that we were missing matter, and not only a little bit, but 80% of all matter in the universe. It’s there, but it’s not some stuff that we know. The fluctuations in the cosmic microwave background, gravitational lensing, the formation of large-scale structures in the universe – none of these would fit with the predictions of general relativity if there wasn’t additional matter to curve space-time. And if you go through all the particles in the standard model, none of them fits the bill. They’re either too light or too heavy or too strongly interacting or too unstable.

But once physicists had the standard model, every problem began to look like a particle, and so, beginning in the mid-1980s, dozens of experiments started to search for dark matter particles. So far, they haven’t found anything. No WIMPs, no axions, no wimpzillas, neutralinos, sterile neutrinos, or other things that would be good candidates for the missing matter.

This might not mean much. It might mean merely that the dark matter particles are even more weakly interacting than expected. It might mean that the particle types we’ve dealt with so far were too simple. Or maybe it means dark matter isn’t made of particles.

It’s an old idea, though one that never rose to popularity, that rather than adding new sources for gravity we could instead keep the known sources but modify the way they gravitate. And the more time passes without a dark matter particle caught in a detector, the more appealing this alternative starts to become. Maybe gravity doesn’t work the way Einstein taught us.

Modified gravity had an unfortunate start because its best known variant – Modified Newtonian Dynamics or MOND – is extremely unappealing from a theoretical point of view. It’s in contradiction with general relativity and that makes it a non-starter for most theorists. Meanwhile, however, there are variants of modified gravity which are compatible with general relativity.

The benefit of modifying gravity is that it offers an explanation for observations that particle dark matter has nothing to say about: Many galaxies show regularities in the way their stars’ motion is affected by dark matter. Clouds of dark particles that collect in halos around galaxies can be flexibly adapted to match the observations of each individual galaxy. But dark matter particles are so flexible that it’s difficult to see why they would reproduce such regularities.

The best known of these regularities is the Tully-Fisher relation, a correlation between the luminosity of a galaxy and the velocity of its outermost stars. Nobody has succeeded in explaining this with particle dark matter, but modified gravity can explain it.

In a recent paper, a group of researchers from the United States offers a neat new way to quantify these regularities. They compare the gravitational acceleration that must be acting on stars in galaxies as inferred from observation (g_obs) with the gravitational acceleration due to the observed stars and gas, ie baryonic matter (g_bar). As expected, the observed gravitational acceleration is much larger than what the visible mass would lead one to expect. The two are also, however, strongly correlated with each other (see figure below). It’s difficult to see how particle dark matter could cause this. (Though I would like to see how this plot looks for a ΛCDM simulation. I would still expect some correlation and would prefer not to judge its strength by gut feeling.)
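For reference, the fit reported in the paper – if I transcribe it correctly – is a single function with one free acceleration scale,

\[ g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_\dagger}}}\,, \qquad g_\dagger \approx 1.2 \times 10^{-10}\ {\rm m/s^2}\,, \]

which approaches g_obs ≈ g_bar at large accelerations and g_obs ≈ \sqrt{g_{\rm bar}\, g_\dagger} at small ones.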

Figure from arXiv:1609.05917 [astro-ph.GA] 

This isn’t so much new evidence as an improved way to quantify existing evidence for regularities in spiral galaxies. Lee Smolin, always quick on his feet, thinks he can explain this correlation with quantum gravity. I don’t quite share his optimism, but it’s arguably intriguing.

Modifying gravity however has its shortcomings. While it seems to work reasonably well on the level of galaxies, it’s hard to make it work for galaxy clusters too. Observations for example of the Bullet cluster (image below) seem to show that the visible mass can be at a different place than the gravitating mass. That’s straight-forward to explain with particle dark matter but difficult to make sense of with modified gravity.

The bullet cluster.
In red: estimated distribution of baryonic mass.
In blue: estimated distribution of gravitating mass, extracted from gravitational lensing.
Source: APOD.

The explanation I presently find most appealing is that dark matter is a type of particle whose dynamical equations sometimes mimic those of modified gravity. This option, pursued, among others, by Stefano Liberati and Justin Khoury, combines the benefits of both approaches without the disadvantages of either. There is, however, a lot of data in cosmology and it will take a long time to find out whether this idea can fit the observations as well – or better – than particle dark matter.

But regardless of what dark matter turns out to be, Rubin’s observations have given rise to one of the most active research areas in physics today. I hope that the Royal Academy eventually wakes up and honors her achievement.

Wednesday, October 05, 2016

Demystifying Spin 1/2

Theoretical physics is the most math-heavy of disciplines. We don’t use all that math because we like to be intimidating, but because it’s the most useful and accurate description of nature we know.

I am often asked to please explain this or that mathematical description in layman terms – and I try to do my best. But truth is, it’s not possible. The mathematical description is the explanation. The best I can do is to summarize the conclusions we have drawn from all that math. And this is pretty much how popular science accounts of theoretical physics work: By summarizing the consequences of lots of math.

This, however, makes science communication in theoretical physics a victim of its own success. If readers get away thinking they can follow a verbal argument, they’re left to wonder why physicists use all that math to begin with. Sometimes I therefore wish articles reporting on recent progress in theoretical physics would on occasion have an asterisk that notes “It takes several years of lectures to understand how B follows from A.”

One of the best examples for the power of math in theoretical physics – if not the best example to illustrate this – are spin 1/2 particles. They are usually introduced as particles that have to be rotated twice to return to the same initial state. I don’t know if anybody who didn’t know the math already has ever been able to make sense of this explanation – certainly not me when I was a teenager.

But this isn’t the only thing you’ll stumble across if you don’t know the math. Your first question may be: Why have spin 1/2 to begin with?

Well, one answer to this is that we need spin 1/2 particles to describe observations. Such particles are fermionic and therefore won’t occupy the same quantum state. (It takes several years of lectures to understand how B follows from A.) This is why for example electrons – which have spin 1/2 – sit in shells around the atomic nucleus rather than clumping together.

But a better answer is “Why not?” (Why not?, it turns out, is also a good answer to most why-questions that Kindergartners come up with.)

Mathematics allows you to classify everything a quantum state can do under rotations. If you do that you not only find particles that return to their initial state after 1, 1/2, 1/3 and so on of a rotation – corresponding to spin 1, 2, 3... etc – you also find particles that return to their initial state after 2, 2/3, 2/5 and so on of a rotation – corresponding to spin 1/2, 3/2, 5/2 etc. The spin, generally, is the inverse of the fraction of rotations necessary to return the particle to itself. The one exception is spin 0 which doesn’t change at all.
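In terms of the math, the rule is short: under a rotation by an angle θ about the quantization axis, the component of a spin-j state with maximal spin projection picks up a phase

\[ |j, m{=}j\rangle \;\to\; e^{-i j \theta}\, |j, m{=}j\rangle\,, \]

so it returns to itself after θ = 2π/j, a fraction 1/j of a full turn – which for spin 1/2 means two full turns.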

So the math tells you that spin 1/2 is a thing, and it’s there in our theories already. It would be stranger if nature didn’t make use of it.

But how come that the math gives rise to such strange and non-intuitive particle behaviors? It comes from the way that rotations (or symmetry transformations more generally) act on quantum states, which is different from how they act on non-quantum states. A symmetry transformation acting on a quantum state must be described by a unitary transformation – this is a transformation which, most importantly, ensures that probabilities always add up to one. And the full set of all symmetry transformations must be described by a “unitary representation” of the group.

Symmetry groups, however, can be difficult to handle, and so physicists prefer to instead work with the algebra associated to the group. The algebra can be used to build up the group, much like you can build up a grid from right-left steps and forwards-backwards steps, repeated sufficiently often. But here’s where things get interesting: If you use the algebra of the rotation group to describe how particles transform, you don’t get back merely the rotation group. Instead you get what’s called a “double cover” of the rotation group. It means – guess! – you have to turn the state around twice to get back to the initial state.
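If you want to see the double cover at work without several years of lectures, here is a minimal numerical sketch (assuming Python with numpy): rotate a spin-1/2 state about the z-axis with the spin-1/2 rotation matrix exp(-iθσ_z/2) and watch the sign.

    import numpy as np

    # Spin-1/2 rotation about the z-axis by angle theta:
    # U(theta) = exp(-i * theta * sigma_z / 2). Since sigma_z is diagonal
    # with entries +1 and -1, the matrix exponential is just the
    # exponential of the diagonal entries.
    def rotate(theta):
        return np.diag([np.exp(-1j * theta / 2), np.exp(+1j * theta / 2)])

    up = np.array([1, 0], dtype=complex)  # a "spin up" state along z

    print(rotate(2 * np.pi) @ up)  # approx [-1, 0]: one full turn flips the sign
    print(rotate(4 * np.pi) @ up)  # approx [+1, 0]: two full turns restore the state

The sign flip of an isolated state is not by itself observable, but it becomes observable in interference experiments – it has been measured, for example, with neutrons.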

I’ve been racking my brain trying to find a good metaphor for “double-cover” to use in the-damned-book I’m writing. Last year, I came across the perfect illustration in real life when we took the kids to a Christmas market. Here it is:

I made a sketch of this for my book:

The little trolley has to make two full rotations to get back to the starting point. And that’s pretty much how the double-cover of the rotation group gives rise to particles with spin 1/2. Though you might have to wrap your head around it twice to understand how it works.

I later decided not to use this illustration in favor of one easier to generalize to higher spin. But you’ll have to buy the-damned-book to see how this works :p

Tuesday, September 27, 2016

Dear Dr B: What do physicists mean by “quantum gravity”?

“Please could you give me a simple definition of ‘quantum gravity’?”


Dear J,

Physicists refer with “quantum gravity” not so much to a specific theory but to the sought-after solution to various problems in the established theories. The most pressing problem is that the standard model combined with general relativity is internally inconsistent. If we just use both as they are, we arrive at conclusions which do not agree with each other. So just throwing them together doesn’t work. Something else is needed, and that something else is what we call quantum gravity.

Unfortunately, the effects of quantum gravity are very small, so presently we have no observations to guide theory development. In all experiments made so far, it’s sufficient to use unquantized gravity.

Nobody knows how to combine a quantum theory – like the standard model – with a non-quantum theory – like general relativity – without running into difficulties (except for me, but nobody listens). Therefore the main strategy has become to find a way to give quantum properties to gravity. Or, since Einstein taught us gravity is nothing but the curvature of space-time, to give quantum properties to space and time.

Just combining quantum field theory with general relativity doesn’t work because, as confirmed by countless experiments, all the particles we know have quantum properties. This means (among many other things) they are subject to Heisenberg’s uncertainty principle and can be in quantum superpositions. But they also carry energy and hence should create a gravitational field. In general relativity, however, the gravitational field can’t be in a quantum superposition, so it can’t be directly attached to the particles, as it should be.

One can try to find a solution to this conundrum, for example by not directly coupling the energy (and related quantities like mass, pressure, momentum flux and so on) to gravity, but instead only coupling the average value, which behaves more like a classical field. This solves one problem, but creates a new one. The average value of a quantum state must be updated upon measurement. This measurement postulate is a non-local prescription and general relativity can’t deal with it – after all Einstein invented general relativity to get rid of the non-locality of Newtonian gravity. (Neither decoherence nor many worlds remove the problem, you still have to update the probabilities, somehow, somewhere.)
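Schematically, this semi-classical way of coupling quantum matter to unquantized gravity sources the curvature with the expectation value of the stress-energy operator,

\[ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi G\, \langle \hat T_{\mu\nu} \rangle\,, \]

and it is the measurement update of \langle \hat T_{\mu\nu} \rangle that general relativity cannot deal with.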

The quantum field theories of the standard model and general relativity clash in other ways. If we try to understand the evaporation of black holes, for example, we run into another inconsistency: Black holes emit Hawking-radiation due to quantum effects of the matter fields. This radiation doesn’t carry information about what formed the black hole. And so, if the black hole entirely evaporates, this results in an irreversible process because from the end-state one can’t infer the initial state. This evaporation however can’t be accommodated in a quantum theory, where all processes can be time-reversed – it’s another contradiction that we hope quantum gravity will resolve.

Then there is the problem with the singularities in general relativity. Singularities, where the space-time curvature becomes infinitely large, are not mathematical inconsistencies. But they are believed to be physical nonsense. Using dimensional analysis, one can estimate that the effects of quantum gravity should become large close to the singularities. And so we think that quantum gravity should replace the singularities with a better-behaved quantum space-time.

The sought-after theory of quantum gravity is expected to solve these three problems: tell us how to couple quantum matter to gravity, explain what happens to information that falls into a black hole, and avoid singularities in general relativity. Any theory which achieves this we’d call quantum gravity, whether or not you actually get it by quantizing gravity.

Physicists are presently pursuing various approaches to a theory of quantum gravity, notably string theory, loop quantum gravity, asymptotically safe gravity, and causal dynamical triangulation, just to name the most popular ones. But none of these approaches has experimental evidence speaking for it. Indeed, so far none of them has made a testable prediction.

This is why, in the area of quantum gravity phenomenology, we’re bridging the gap between theory and experiment with simplified models, some of which are motivated by specific approaches (hence: string phenomenology, loop quantum cosmology, and so on). These phenomenological models don’t aim to directly solve the above mentioned problems; they merely provide a mathematical framework – consistent in its range of applicability – to quantify and hence test the presence of effects that could be signals of quantum gravity, for example space-time fluctuations, violations of the equivalence principle, deviations from general relativity, and so on.

Thanks for an interesting question!

Wednesday, September 21, 2016

We understand gravity just fine, thank you.

Yesterday I came across a Q&A on the website of Discover magazine, titled “The Root of Gravity - Does recent research bring us any closer to understanding it?” Jeff Lepler from Michigan has the following question:
Q: “Are we any closer to understanding the root cause of gravity between objects with mass? Can we use our newly discovered knowledge of the Higgs boson or gravitational waves to perhaps negate mass or create/negate gravity?”
A person by the name of Bill Andrews (unknown to me) gives the following answer:
A: “Sorry, Jeff, but scientists still don’t really know why gravity works. In a way, they’ve just barely figured out how it works.”
The answer continues, but let’s stop right there where the nonsense begins. What is that even supposed to mean, scientists don’t know “why” gravity works? And did the Bill person really think he could get away with swapping “why” for a “how” and nobody would notice?

The purpose of science is to explain observations. We have a theory by the name of General Relativity that explains literally all data on gravitational effects. Indeed, that General Relativity is so dramatically successful is a great frustration for all those people who would like to revolutionize science a la Einstein. So in which sense, please, do scientists barely know how it works?

For all we can presently tell gravity is a fundamental force, which means we have no evidence for an underlying theory from which gravity could be derived. Sure, theoretical physicists are investigating whether there is such an underlying theory that would give rise to gravity as well as the other interactions, a “theory of everything”. (Please submit nomenclature complaints to your local language police, not to me.) Would such a theory of everything explain “why” gravity works? No, because that’s not a meaningful scientific question. A theory of everything could potentially explain how gravity arises from more fundamental principles, similar to the way, say, the ideal gas law arises from the statistical properties of many atoms in motion. But that still wouldn’t explain why there should be something like gravity, or anything, in the first place.

Either way, even if gravity arises within a larger framework like, say, string theory, the effects of what we call gravity today would still come about because energy-densities (and related quantities like pressure and momentum flux and so on) curve space-time, and fields move in that space-time. Just that these quantities might no longer be fundamental. We’ve known how this works for 101 years.

After a few words on Newtonian gravity, the answer continues:
“Because the other forces use “force carrier particles” to impart the force onto other particles, for gravity to fit the model, all matter must emit gravitons, which physically embody gravity. Note, however, that gravitons are still theoretical. Trying to reconcile these different interpretations of gravity, and understand its true nature, are among the biggest unsolved problems of physics.”
Reconciling which different interpretations of gravity? These are all the same “interpretation.” It is correct that we don’t know how to quantize gravity so that the resulting theory remains viable also when gravity becomes strong. It’s also correct that the force-carrying particle associated with the quantization – the graviton – hasn’t been detected. But the question was about gravity, not quantum gravity. Reconciling the graviton with unquantized gravity is straightforward – it’s called perturbative quantum gravity – and is exactly the reason most theoretical physicists are convinced the graviton exists. It’s just that this reconciliation breaks down when gravity becomes strong, which means it’s only an approximation.
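For the record, the perturbative treatment writes the metric as flat space-time plus a small perturbation,

\[ g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}\,, \qquad |h_{\mu\nu}| \ll 1\,, \]

and quantizes h_{\mu\nu}; the graviton is the quantum of this field. The expansion fails when the perturbation – or the energy of its quanta – becomes large, roughly at the Planck scale, which is why it is only an approximation.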
“But, alas, what we do know does suggest antigravity is impossible.”
That’s correct on a superficial level, but it depends on what you mean by antigravity. If by antigravity you mean making any of the matter which surrounds us “fall up,” then it’s correct that this is impossible. But there are modifications of general relativity that have effects one can plausibly call anti-gravitational. That’s a longer story though and shall be told another time.

A sensible answer to this question would have been:
“Dear Jeff,

The recent detection of gravitational waves has been another confirmation of Einstein’s theory of General Relativity, which still explains all the gravitational effects that physicists know of. According to General Relativity the root cause of gravity is that all types of energy curve space-time and all matter moves in this curved space-time. Near planets, such as our own, this can be approximated to good accuracy by Newtonian gravity.

There isn’t presently any observation which suggests that gravity itself emerges from another theory, though it is certainly a speculation that many theoretical physicists have pursued. There thus isn’t any deeper root for gravity because it’s presently part of the foundations of physics. The foundations are the roots of everything else.

The discovery of the Higgs boson doesn’t tell us anything about the gravitational interaction. The Higgs boson is merely there to make sure particles have mass in addition to energy, but gravity works the same either way. The detection of gravitational waves is exciting because it allows us to learn a lot about the astrophysical sources of these waves. But the waves themselves have proved to be as expected from General Relativity, so from the perspective of fundamental physics they didn’t bring news.

Within the incredibly well confirmed framework of General Relativity, you cannot negate mass or its gravitational pull.”
You might also enjoy hearing what Richard Feynman had to say when he was asked a similar question about the origin of the magnetic force:

This answer really annoyed me because it’s a lost opportunity to explain how well physicists understand the fundamental laws of nature.

Thursday, September 15, 2016

Experimental Search for Quantum Gravity 2016

Research in quantum gravity is quite a challenge since we neither have a theory nor data. But some of us like a challenge.

So far, most effort in the field has gone into using requirements of mathematical consistency to construct a theory. It is impossible of course to construct a theory based on mathematical consistency alone, because we can never prove our assumptions to be true. All we know is that the assumptions give rise to good predictions in the regime where we’ve tested them. Without assumptions, no proof. Still, you may hope that mathematical consistency tells you where to look for observational evidence.

But in the second half of the 20th century, theorists used the weakness of gravity as an excuse not to think about how to experimentally test quantum gravity at all. This isn’t merely a sign of laziness, it’s a throwback to the days when philosophers believed they could find out how nature works by introspection. Just that now many theoretical physicists believe mathematical introspection is science. Particularly disturbing to me is how frequently I speak with students or young postdocs who have never even given thought to the question of what makes a theory scientific. That’s one of the reasons the disconnect between physics and philosophy worries me.

In any case, the cure clearly isn’t more philosophy, but more phenomenology. The effects of quantum gravity aren’t necessarily entirely out of experimental reach. Gravity isn’t generally a weak force, not in the same way that, for example, the weak nuclear force is weak. That’s because the effects of gravity get stronger with the amount of mass (or energy) that exerts the force. Indeed, this property of the gravitational force is the very reason why it’s so hard to quantize.

Quantum gravitational effects hence were strong in the early universe, they are strong inside black holes, and they can be non-negligible for massive objects that have pronounced quantum properties. Furthermore, the theory of quantum gravity can be expected to give rise to deviations from general relativity or the symmetries of the standard model, which can have consequences that are observable even at low energies.

The often repeated argument that we’d need to reach enormously high energies – close to the Planck energy, 16 orders of magnitude higher than LHC energies – is simply wrong. Physics is full of examples of short-distance phenomena that give rise to effects at longer distances, such as atoms causing Brownian motion, or quantum electrodynamics allowing stable atoms to begin with.

I have spent the last 10 years or so studying the prospects to find experimental evidence for quantum gravity. Absent a fully-developed theory we work with models to quantify effects that could be signals of quantum gravity, and aim to test these models with data. The development of such models is relevant to identify promising experiments to begin with.

Next week, we will hold the 5th international conference on Experimental Search for Quantum Gravity, here in Frankfurt. And I dare to say we have managed to pull together an awesome selection of talks.

We’ll hear about the prospects of finding evidence for quantum gravity in the CMB (Bianchi, Krauss, Vennin) and in quantum oscillators (Paternostro). We have a lecture about the interface between gravity and quantum physics, both on long and short distances (Fuentes), and a talk on how to look for moduli and axion fields that are generic consequences of string theory (Conlon). Of course we’ll also cover Loop Quantum Cosmology (Barrau), asymptotically safe gravity (Eichhorn), and causal sets (Glaser). We’re super up-to-date by having a talk about constraints from the LIGO gravitational wave-measurements on deviations from general relativity (Yunes), and several of the usual suspects speaking about deviations from Lorentz-invariance (Mattingly), Planck stars (Rovelli, Vidotto), vacuum dispersion (Giovanni), and dimensional reduction (Magueijo). There’s neutrino physics (Paes), a talk about what the cosmological constant can tell us about new physics (Afshordi), and, and, and!

You can download the abstracts here and the timetable here.

But the best part is that I’m not telling you this to depress you because you can’t be with us: our IT guys tell me we’ll both record the talks and livestream them (to the extent that the speakers consent, of course). I’ll share the URL with you here once everything is set up, so stay tuned.

Update: Streaming link will be posted on the institute's main page shortly before the event. Another update: Livestream is available here.

Sunday, September 11, 2016

I’ve read a lot of books recently

[Reading is to writing what eating is to...]

Dreams Of A Final Theory: The Scientist's Search for the Ultimate Laws of Nature
Steven Weinberg
Vintage, Reprint Edition (1994)

This book appeared when I was still in high school and I didn’t take note of it then. Later it seemed too out-of-date to bother, but meanwhile it’s almost become a historical document. Written with the pretty explicit aim to argue in favor of the Superconducting Supercollider (a US proposal for a large particle collider that was scrapped in the early 90s), it’s the most flawless popular science book about theoretical physics I’ve ever come across.

Weinberg’s explanations are both comprehensible and remarkably accurate. The book contains no unnecessary clutter, is both well-structured and well written, and Weinberg doesn’t hold back with his opinions, neither on religion nor on philosophy.

It’s also the first time I’ve tried an audio-book. I listened to it while treadmill running. A lot of sweat went into the first chapters. But I gave up halfway through and bought the paperback, which I read on the plane to Austin. Weinberg is one of the people I interviewed for my book.

Lesson learned: Audiobooks aren’t for me.

Truth And Beauty – Aesthetics and Motivations in Science
Subrahmanyan Chandrasekhar
University of Chicago Press (1987)

I had read this book before but wanted to remind myself of its content. It’s a collection of essays on the role of beauty in physics, mostly focused on general relativity and the early 20th century. Using historical examples like Milne, Eddington, Weyl, and Einstein, Chandrasekhar discusses various aspects of beauty, like elegance, simplicity, or harmony. I find it too bad that Chandrasekhar didn’t bring in more of his own opinion but mostly summarizes other people’s thoughts.

Lesson learned: Tell the reader what you think.

Truth or Beauty – Science and the Quest for Order
David Orrell
Yale University Press (2012)

In this book, mathematician David Orrell argues that beauty isn’t a good guide to truth. It’s an engagingly written book which covers a lot of ground, primarily in physics, from heliocentrism to string theory. But Orrell tries too hard to make everything fit his bad-beauty narrative. Many of his interpretations are over-the-top, like his complaint that
 “[T]he aesthetics of science – and particularly the “hard” sciences such as physics – have been characterized by a distinctly male feel. For example, feminist psychologists have noted that the classical picture of the atom as hard, indivisible, independent, separate, and so on corresponds very closely to the stereotypically masculine sense of self. It must have come as a shock to the young, male champions of quantum theory when they discovered that their equations describing the atom were actually soft, fuzzy, and uncertain – in other words, stereotypically female.”
He further notes that many male physicists like to refer to nature as “she,” that Gell-Mann likes the idea of using particle accelerators to penetrate deeper (into the structure of particles), and quotes Lee Smolin’s remark that “the most cherished goal in physics, as in bad romance novels, is unification.” This is just to illustrate the, erm, depth of Orrell’s arguments.

In summary, it’s a nice book, but it’s hard to take Orrell’s argument seriously. Or maybe the whole thing was a joke to begin with.

Lesson learned: Don’t try to explain everything.

The End Of Physics - The Myth Of A Unified Theory
David Lindley
Basic Books (1994)

This is a strange book. While reading, I got the impression that the author is constantly complaining about something, but it didn’t become clear to me what. Lindley tells the story of how physicists discovered increasingly more fundamental and also more unified laws of nature, and how they are hoping to finally develop a theory of everything. This, so he writes, would be the end of physics. Just that, as he explains in the next sentence, it of course wouldn’t be the end of physics.

Lindley likes words and likes to use a lot of them. Consequently the book reads like he wanted to cram in the whole history of physics, from the beginning to the end, with him having the last word.

His argument for why a theory of everything would remain a “myth” is essentially that it would be hard to test, something that nobody can really disagree on. But “hard to test” doesn’t mean “impossible to test,” and Lindley is clearly out of his depth when it comes to evaluating the experimental prospects of, say, probing quantum gravity, so he sticks with superficial polemics. Of course the book is 20 years old, and I can’t blame the author for not knowing what’s happened since, but from today’s perspective his rant seems baseless.

In summary, it’s a well-written book, but it has a fuzzy message. (Also, the reprint quality is terrible.)

Lesson learned: If you have something to say, say it.

Why Beauty Is Truth – A History of Symmetry
Ian Stewart
Basic Books (2007)

This is a book not about the physics but about the mathematics of symmetries: symmetry groups, Lie groups, Lie algebras, quaternions, global symmetries, local symmetries, and all that. Stewart also discusses the relevance of these structures for physics, but his emphasis is on physics being an application of mathematics. The book is held together by stories of the mathematicians who led the way. The title of the book is somewhat misleading. Stewart actually doesn’t discuss much the question of “why” beauty is truth. He merely demonstrates, by example, that many truths are beautiful.

It’s a pretty good book, both interesting and well-written, if somewhat too long for my taste. It doesn’t seem to have gotten the attention it deserves.

Lesson learned: It’s hard to write a popular science book that anyone will still recall a decade later.

Eyes On The Sky: A Spectrum of Telescopes
Francis Graham-Smith
Oxford University Press (2016)

This is a book about telescopes, from then to now, from the radio regime to gamma rays. It’s not a book about astrophysics, it’s not a book about cosmology, and it’s not a book about history. It’s a book about telescopes. It is a thoroughly useful book, full of facts and figures and images, but you need to be really interested in telescopes to get through it. I read this book because I wanted to write a paragraph about the development of modern telescopes but figured I didn’t actually know much about modern telescopes. Now I’m much wiser.

Lesson learned: If you need to read a 200-page book to write a single paragraph, you’ll never get done.

Beauty and Revolution in Science
James McAllister
Cornell University Press (1999)

Philosopher James McAllister reexamines the Kuhnian idea of paradigm changes. He proposes that it should be amended, and argues that what characterizes a revolution is not the change of the entire scientific paradigm, but merely the change of aesthetic ideals. To back up his argument, he discusses several historical cases. This is not a popular science book, and it’s not always the most engaging read, but I have found it to be very insightful. It is somewhat unfortunate though that he didn’t spend more time illuminating the social dynamics that goes with the prevalence of beauty ideals in science.

Lesson learned: Philosophy isn’t dead.

Higher Speculations
Helge Kragh
Oxford University Press (2011)

Kragh’s is a book about the failure of speculative ideas in physics. The steady state universe, mechanism, cyclic models of the universe, and various theories of everything are laid out in historical perspective. I have found this book both interesting and useful, but some parts are quite heavy reads. Kragh doesn’t offer an analysis or draw a lesson, and he mostly refrains from judgement. He simply tells the reader what happened.

Lesson learned: Even smart people sometimes believe really strange things.

Supersymmetry: Unveiling The Ultimate Laws Of Nature
Gordy Kane
Basic Books (2001)

Particle physicist Gordon Kane explains why the supersymmetric extension of the standard model has become so popular and how it could be tested. Whether or not you are convinced by supersymmetry, you get to learn a lot about particle physics. It’s a straightforward pop-science book that does a good job explaining why theorists have spent so much time on supersymmetry.

Lesson learned: You don’t need to write fancy to write well.

Nature’s Blueprint - Supersymmetry and the Search for a Unified Theory of Matter and Force
Dan Hooper
Smithsonian (2008)

A book about high energy particle physics, the standard model, unification, and the appeal of supersymmetry. It’s a well-written book that gives the reader a pretty good idea of how particle physicists work and think. Hooper does a great job getting across the excitement that comes with the hope of being about to discover a new fundamental law of nature. The book’s publication date was well timed, just before the LHC started taking data.

Lesson learned: Your book might become history faster than you think.

Tuesday, September 06, 2016

Sorry, the universe wasn’t made for you

Last month, game reviewers were all over No Man’s Sky, a new space adventure launched to much press attention. Unlike previous video games, this one calculates players’ environments from scratch rather than revealing hand-crafted landscapes and creatures. The calculations populate No Man’s Sky’s virtual universe with about 10^19 planets, all with different flora and fauna – at least that’s what we’re told, not like anyone actually checked. That seems a giganourmous number but is still less than the number of planets in the actual universe, estimated at roughly 10^24.

Users’ expectations of No Man’s Sky were high – and were highly disappointed. All the different planets, it turns out, still get a little repetitive with their limited set of options and features. It’s hard to code a universe as surprising as reality and run it on processors that occupy only a tiny fraction of that reality.

Theoretical physicists, meanwhile, have the opposite problem: The fictive universes they calculate are more surprising than they’d like them to be.

Having failed on their quest for a theory of everything, in the area of quantum gravity many theoretical physicists now accept that a unique theory can’t be derived from first principles. Instead, they believe, additional requirements must be used to select the theory that actually describes the universe we observe. That, of course, is what we’ve always done to develop theories – the additional requirements being empirical adequacy.

The new twist is that many of these physicists think the missing observational input is the existence of life in our universe. I hope you just raised an eyebrow or two because physicists don’t normally have much business with “life.” And indeed, they usually only speak about preconditions of life, such as atoms and molecules. But that the sought-after theory must be rich enough to give rise to complex structures has become the most popular selection principle.

Known as the “anthropic principle,” this argument allows physicists to discard all theories that can’t produce sentient observers, on the rationale that we don’t inhabit a universe that lacks them. One could of course instead just discard all theories with parameters that don’t match the measured values, but that would be so last century.

The anthropic principle is often brought up in combination with the multiverse, but logically it’s a separate argument. The anthropic principle – that our theories must be compatible with the existence of life in our universe – is an observational requirement that can lead to constraints on the parameters of a theory. This requirement must be fulfilled whether or not universes for different parameters actually exist. In the multiverse, however, the anthropic principle is supposedly the only criterion by which to select the theory for our universe, at least in terms of probability so that we are likely to find ourselves here. Hence the two are often discussed together.

Anthropic selection had a promising start with Weinberg’s prescient estimate for the cosmological constant. But the anthropic principle hasn’t solved the problem it was meant to solve, because it does not single out one unique theory either. This has been known for at least a decade, but the myth that our universe is “finetuned for life” still hasn’t died.

The general argument against the success of anthropic selection is that all evidence for the finetuning of our theories explores only a tiny space of all possible combinations of parameters. A typical argument for finetuning goes like this: If parameter X was only a tiny bit larger or smaller than the observed value, then atoms couldn’t exist or all stars would collapse or something similarly detrimental to the formation of large molecules. Hence, parameter X must have a certain value to high precision. However, these arguments for finetuning – of which there exist many – don’t take into account simultaneous changes in several parameters and are therefore inconclusive.

Importantly, besides this general argument there also exist explicit counterexamples. In the 2006 paper A Universe Without Weak Interactions, Harnik, Kribs, and Perez discussed a universe that seems capable of complex chemistry and yet has fundamental particles entirely different from our own. More recently, Abraham Loeb from Harvard argued that primitive forms of life might have been possible already in the early universe under circumstances very different from today’s. And a recent paper (ht Jacob Aron) adds another example:

    Stellar Helium Burning in Other Universes: A solution to the triple alpha fine-tuning problem
    By Fred C. Adams and Evan Grohs
    arXiv:1608.04690 [astro-ph.CO]

In this work the authors show that some combinations of fundamental constants would actually make it easier for stars to form Carbon, an element often assumed to be essential for the development of life.

This is a fun paper because it extends on the work by Fred Hoyle, who was the first to use the anthropic principle to make a prediction (though some historians question whether that was his actual motivation). He understood that it’s difficult for stars to form heavy elements because the chain is broken in the first steps by Beryllium. Beryllium has atomic number 4, but the version that’s created in stellar nuclear fusion from Helium (with atomic number 2) is unstable and therefore can’t be used to build even heavier nuclei.

Hoyle suggested that the chain of nuclear fusion avoids Beryllium and instead goes from three Helium nuclei straight to carbon (with atomic number 6). Known as the triple-alpha process (because Helium nuclei are also referred to as alpha-particles), the chances of this happening are slim – unless the Helium merger hits a resonance of the Carbon nucleus. Which it does if the parameters are “just right.” Hoyle hence concluded that such a resonance must exist, and that was later experimentally confirmed.
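Spelled out, the triple-alpha chain is

\[ {}^{4}\mathrm{He} + {}^{4}\mathrm{He} \rightleftharpoons {}^{8}\mathrm{Be}\,, \qquad {}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \to {}^{12}\mathrm{C}^{*} \to {}^{12}\mathrm{C} + \gamma\,, \]

where the intermediate Beryllium mostly falls apart again, and the excited Carbon state – now called the Hoyle state, about 7.65 MeV above the ground state – only rarely decays down to stable Carbon. But rarely is enough.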

Adams and Grohs now point out that there are altogether different sets of parameters for which Beryllium is simply stable, so that the Carbon resonance doesn’t have to be finely tuned. In their paper, they do not deal with the fundamental constants that we normally use in the standard model – they instead discuss nuclear structure, which has constants that are derived from the standard model constants but are quite complicated functions thereof (if known at all). Still, they have basically invented a fictional universe that seems at least as capable of producing life as ours.

This study is hence another demonstration that a chemistry complex enough to support life can arise under circumstances that are not anything like the ones we experience today.

I find it amusing that many physicists believe the evolution of complexity is the exception rather than the rule. Maybe it’s because they mostly deal with simple systems, at or close to equilibrium, with few particles, or with many particles of the same type – systems that the existing math can deal with.

It makes me wonder how many more fictional universes physicists will invent and write papers about before they bury the idea that anthropic selection can single out a unique theory. Fewer, I hope, than there are planets in No Man’s Sky.

Monday, August 29, 2016

Dear Dr. B: How come we never hear of a force that the Higgs boson carries?

    “Dear Dr. Hossenfelder,

    First, I love your blog. You provide a great insight into the world of physics for us laymen. I have read in popular science books that the bosons are the ‘force carriers.’ For example the photon carries the electromagnetic force, the gluon, the strong force, etc. How come we never hear of a force that the Higgs boson carries?

    Ramiro Rodriguez
Dear Ramiro,

The short answer is that you never hear of a force that the Higgs boson carries because it doesn’t carry one. The longer answer is that not all bosons are alike. This of course begs the question just how the Higgs-boson is different, so let me explain.

The standard model of particle physics is based on gauge symmetries. This basically means that the laws of nature have to remain invariant under transformations in certain internal spaces, and these transformations can change from one place to the next and one moment to the next. They are what physicists call “local” symmetries, as opposed to “global” symmetries whose transformations don’t change in space or time.

Amazingly enough, the requirement of gauge symmetry automatically explains how particles interact. It works like this. You start with fermions, which are particles of half-integer spin, like electrons, muons, quarks and so on. And you require that the fermions’ behavior must respect a gauge symmetry, which is classified by a symmetry group. Then you ask what equations you can possibly get that do this.

Since the fermions can move around, the equations that describe what they do must contain derivatives both in space and in time. This causes a problem, because if you want to know how the fermions’ motion changes from one place to the next you’d also have to know what the gauge transformation does from one place to the next, otherwise you can’t tell apart the change in the fermions from the change in the gauge transformation. But if you’d need to know that transformation, then the equations wouldn’t be invariant.

From this you learn that the only way the fermions can respect the gauge symmetry is if you introduce additional fields – the gauge fields – which exactly cancel the contribution from the space-time dependence of the gauge transformation. In the standard model the gauge fields all have spin 1, which means they are bosons. That's because to cancel the terms that came from the space-time derivative, the fields need to have the same transformation behavior as the derivative, which is that of a vector, hence spin 1.
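To make this a little more concrete, here is how it looks for the simplest case, a U(1) symmetry like that of electromagnetism. The fermion field \psi transforms with a phase \alpha(x) that can differ from place to place, the gauge field A_\mu shifts to compensate, and the derivative is replaced by a covariant derivative:

\[ \psi \to e^{i\alpha(x)}\,\psi\,, \qquad A_\mu \to A_\mu + \frac{1}{g}\,\partial_\mu \alpha\,, \qquad D_\mu \psi = \left(\partial_\mu - i g A_\mu\right)\psi\,. \]

The unwanted term that comes from the derivative acting on the phase is then exactly cancelled by the shift of A_\mu, so that D_\mu\psi transforms with the same phase as \psi itself and the equations remain gauge invariant.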

To really follow this chain of arguments – from the assumption of gauge symmetry to the presence of gauge-bosons – requires several years’ worth of lectures, but the upshot is that the bosons which exchange the forces aren’t added by hand to the standard model, they are a consequence of symmetry requirements. You don’t get to pick the gauge-bosons, neither their number nor their behavior – their properties are determined by the symmetry.

In the standard model, there are 12 such force-carrying bosons: the photon (γ), the W+, W-, Z, and 8 gluons. They belong to three gauge symmetries, U(1), SU(2) and SU(3). Whether a fermion does or doesn’t interact with a gauge-boson depends on whether the fermion is “gauged” under the respective symmetry, ie transforms under it. Only the quarks, for example, are gauged under the SU(3) symmetry of the strong interaction, hence only the quarks couple to gluons and participate in that interaction. The bosons introduced this way are sometimes specifically referred to as “gauge-bosons” to indicate their origin.

The Higgs-boson in contrast is not introduced by a symmetry requirement. It has an entirely different function, which is to break a symmetry (the electroweak one) and thereby give mass to particles. The Higgs doesn’t have spin 1 (like the gauge-bosons) but spin 0. Indeed, it is the only presently known elementary particle with spin zero. Sheldon Glashow has charmingly referred to the Higgs as the “flush toilet” of the standard model – it’s there for a purpose, not because we like the smell.

The distinction between fermions and bosons can be removed by postulating an exchange symmetry between these two types of particles, known as supersymmetry. It works basically by generalizing the concept of a space-time direction to not merely be bosonic, but also fermionic, so that there is now a derivative that behaves like a fermion.

In the supersymmetric extension of the standard model there are then partner particles to all already known particles. The partners of the fermions are bosons and are denoted by adding an “s” before the particle’s name (selectron, stop quark, and so on); the partners of the bosons are fermions and are denoted by adding “ino” after the particle’s name (Wino, photino, and so on). There is then also a Higgsino, which is the partner particle of the Higgs and has spin 1/2. It is gauged under the standard model symmetries, hence participates in the interactions, but it still is not itself a consequence of a gauge symmetry.

In the standard model most of the bosons are also force-carriers, but bosons and force-carriers just aren’t the same category. To use a crude analogy, just because most of the men you know (most of the bosons in the standard model) have short hair (are force-carriers) doesn’t mean that to be a man (to be a boson) you must have short hair (exchange a force). Bosons are defined by having integer spin, as opposed to the half-integer spin that fermions have, and not by their ability to exchange interactions.

In summary the answer to your question is that certain types of bosons – the gauge bosons – are a consequence of symmetry requirements from which it follows that these bosons do exchange forces. The Higgs isn’t one of them.

Thanks for an interesting question!

Peter Higgs receiving the Nobel Prize from the King of Sweden.
[Img Credits: REUTERS/Claudio Bresciani/TT News Agency]


Wednesday, August 24, 2016

What if the universe was like a pile of laundry?

    What if the universe was like a pile of laundry?

    Have one.

    See this laundry pile? Looks just like our universe.


    Here, have another.

    See it now? It’s got three dimensions and all.

    But look again.

    The shirts and towels, they’re really crinkled and interlocked two-dimensional surfaces.


    It’s one-dimensional yarn, knotted up tightly.

    You ok?

    Have another.

    I see it clearly now. It’s everything at once, one-two-three dimensional. Just depends on how closely you look at it.

    Amazing, don’t you think? What if our universe was just like that?

Universal Laundry Pile.
[Img Src: Clipartkid]

It doesn’t sound like a sober thought, but it’s got math behind it, so physicists think there might be something to it. Indeed the math piled up lately. They call it “dimensional reduction,” the idea that space on short distances has fewer than three dimensions – and it might help physicists to quantize gravity.

We’ve gotten used to space with additional dimensions, rolled up so small we can’t observe them. But how do you get rid of dimensions instead? To understand how that works we first have to clarify what we mean by “dimension.”

We normally think about dimensions of space by picturing lines which spread from a point. How quickly the lines dilute with the distance from the point tells us the “Hausdorff dimension” of a space. The faster the lines diverge from each other with distance, the larger the Hausdorff dimension. If you speak through a pipe, for example, sound waves spread less and your voice carries farther. The pipe hence has a lower Hausdorff dimension than our normal 3-dimensional office cubicles. It’s the Hausdorff dimension that we colloquially refer to as just dimension.
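
Roughly speaking (and glossing over the mathematical fine print that separates the Hausdorff dimension from the simpler box-counting version), the idea is that the volume within a distance r from a point grows as a power of r, and that power is the dimension:

\[
V(r) \;\propto\; r^{\,d_H},
\]

so doubling the distance multiplies the volume by \( 2^{d_H} \): by 8 in our familiar three dimensions, but only by 2 inside a narrow pipe.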

For dimensional reduction, however, it is not the Hausdorff dimension which is relevant, but instead the “spectral dimension,” which is a slightly different concept. We can calculate it by first getting rid of the “time” in “space-time” and making it into space (period). We then place a random walker at one point and measure the probability that it returns to the same point during its walk. The smaller the average return probability, the higher the probability that the walker gets lost, and the higher the spectral dimension.

Normally, for a non-quantum space, both notions of dimension are identical. However, add quantum mechanics and the spectral dimension at short distances goes down from four to two. The return probability for short walks becomes larger than expected, and the walker is less likely to get lost – this is what physicists mean by “dimensional reduction.”
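
To see what this definition gives in the ordinary, non-quantum case, here is a minimal numerical sketch (my own illustration, not taken from any of the papers mentioned below): random walkers on flat lattices of one, two and three dimensions. On a flat lattice the return probability after t steps falls off roughly as t^(−d/2), so the log-log slope recovers the ordinary dimension; the quantum-gravity results amount to a modification of this scaling at short distances.

```python
# Estimate the spectral dimension of flat d-dimensional lattices from the
# return probability of random walkers. This is a toy illustration: on an
# ordinary lattice the return probability scales as t^(-d/2), so the fitted
# spectral dimension should come out near the usual dimension d.
import numpy as np

rng = np.random.default_rng(42)

def return_probability(dim, steps, walkers=200_000):
    """Fraction of walkers that are back at the origin after `steps` steps."""
    pos = np.zeros((walkers, dim), dtype=np.int64)
    for _ in range(steps):
        axis = rng.integers(0, dim, size=walkers)   # pick a random direction...
        sign = rng.choice([-1, 1], size=walkers)    # ...and a random orientation
        pos[np.arange(walkers), axis] += sign
    return np.mean(np.all(pos == 0, axis=1))

for dim in (1, 2, 3):
    # Even step numbers only: after an odd number of steps the walker
    # cannot be back at the origin.
    ts = np.array([10, 20, 40, 80])
    ps = np.array([return_probability(dim, t) for t in ts])
    slope = np.polyfit(np.log(ts), np.log(ps), 1)[0]
    print(f"{dim}d lattice: estimated spectral dimension ~ {-2 * slope:.2f}")
```

Up to lattice artifacts and statistical noise, the estimates come out close to 1, 2 and 3, as they should for a classical space. The interesting physics sits entirely in how this scaling changes once quantum fluctuations of the geometry are included.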

The spectral dimension is not necessarily an integer; it can take on any value. This value starts at 4 when quantum effects can be neglected, and decreases when the walker’s sensitivity to quantum effects at shortest distances increases. Physicists therefore also like to say that the spectral dimension “runs,” meaning its value depends on the resolution at which space-time is probed.

Dimensional reduction is an attractive idea because quantizing gravity is considerably easier in lower dimensions, where the infinities that plague traditional attempts to quantize gravity go away. A theory with a reduced number of dimensions at shortest distances therefore has much higher chances to remain consistent and so to provide a meaningful theory for the quantum nature of space and time. Not surprisingly, then, dimensional reduction has received quite some attention among physicists lately.

This strange property of quantum-spaces was first found in Causal Dynamical Triangulation (hep-th/0505113), an approach to quantum gravity that relies on approximating curved spaces by triangular patches. In this work, the researchers did a numerical simulation of a random walk in such a triangulated quantum-space, and found that the spectral dimension goes down from four to two. Or actually to 1.80 ± 0.25 if you want to know it precisely.

Instead of doing numerical simulations, it is also possible to study the spectral dimension mathematically, which has since been done in various other approaches. For this, physicists exploit that the behavior of the random walk is governed by a differential equation – the diffusion equation – which depends on the curvature of space. In quantum gravity, the curvature has quantum fluctuations, and it is then the average value of the curvature which enters the diffusion equation. From the diffusion equation one then calculates the return probability for the random walk.
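
For reference, the standard bookkeeping looks roughly like this (conventions differ somewhat between papers). The random walk corresponds to the heat kernel P(x,x′;σ) of the diffusion equation with a fictitious diffusion time σ, and the spectral dimension measures how the averaged return probability scales with σ:

\[
\partial_\sigma P(x,x';\sigma) = \Delta_x P(x,x';\sigma),
\qquad
P(\sigma) = \frac{1}{V}\int \mathrm{d}^d x\, \sqrt{g}\; P(x,x;\sigma),
\qquad
d_s(\sigma) = -2\,\frac{\mathrm{d} \ln P(\sigma)}{\mathrm{d} \ln \sigma}.
\]

On flat d-dimensional space \( P(\sigma) \propto \sigma^{-d/2} \) and hence \( d_s = d \); in the quantum-gravity calculations it is the fluctuation-averaged geometry entering the Laplacian Δ that makes \( d_s \) depend on σ.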

This way, physicists have inferred the spectral dimension also in Asymptotically Safe Gravity (hep-th/0508202), an approach to quantum gravity which relies on the resolution-dependence (the “running”) of quantum field theories. And they found the same drop from four to two spectral dimensions.

Another indication comes from Loop Quantum Gravity, where the scaling of the area operator with length changes at short distances. In this case it is somewhat questionable whether the notion of curvature makes sense at all on short distances. But ignoring this, one can construct the diffusion equation and finds that the spectral dimension drops from four to two (0812.2214).

And then there is Horava-Lifshitz gravity, yet another modification of gravity which some believe helps with quantizing it. Here too, dimensional reduction has been found (0902.3657).

It is difficult to visualize what is happening with the dimensionality of space if it goes down continuously, rather than in discrete steps as in the example with the laundry pile. Maybe a good way to picture it, as Calcagni, Eichhorn and Saueressig suggest, is to think of the quantum fluctuations of space-time hindering a particle’s random walk, thereby slowing it down. It wouldn’t have to be that way. Quantum fluctuations could also kick the particle around wildly, thereby increasing the spectral dimension rather than decreasing it. But that’s not what the math tells us.

One shouldn’t take this picture too seriously though, because we’re talking about a random walk in space, not space-time, and so it’s not a real physical process. Turning time into space might seem strange, but it is a common mathematical simplification which is often used for calculations in quantum theory. Still, it makes it difficult to interpret what is happening physically.

I find it intriguing that several different approaches to quantum gravity share a behavior like this. Maybe it is a general property of quantum space-time. But then, there are many different types of random walks, and while these different approaches to quantum gravity share a similar scaling behavior for the spectral dimension, they differ in the type of random walk that produces this scaling (1304.7247). So maybe the similarities are only superficial.

And of course this idea has no observational evidence speaking for it. Maybe it never will. But one day, I’m sure, all the math will click into place and everything will make perfect sense. Meanwhile, have another.

[This article first appeared on Starts With A Bang under the title Dimensional Reduction: The Key To Physics' Greatest Mystery?]

Friday, August 19, 2016

Away Note

I'll be in Stockholm next week for a program on Black Holes and Emergent Spacetime, so please be prepared for some service interruptions.

Monday, August 15, 2016

The Philosophy of Modern Cosmology (srsly)

Model of Inflation.
I wrote my recent post on the “Unbearable Lightness of Philosophy” to introduce a paper summary, but it got somewhat out of hand. I don’t want to withhold the actual body of my summary though. The paper in question is

Before we start I have to warn you that the paper speaks a lot about realism and underdetermination, and I couldn’t figure out what exactly the authors mean by these words. Sure, I looked them up, but that didn’t help because there doesn’t seem to be an agreement on what the words mean. It’s philosophy after all.

Personally, I subscribe to a philosophy I’d like to call agnostic instrumentalism, which means I think science is useful and I don’t care what else you want to say about it – anything from realism to solipsism to Carroll’s “poetic naturalism” is fine by me. In newspeak, I’m a whateverist – now go away and let me science.

The authors of the paper, in contrast, position themselves as follows:
“We will first state our allegiance to scientific realism… We take scientific realism to be the doctrine that most of the statements of the mature scientific theories that we accept are true, or approximately true, whether the statement is about observable or unobservable states of affairs.”
But rather than explaining what this means, the authors next admit that this definition contains “vague words,” and apologize that they “will leave this general defense to more competent philosophers.” Interesting approach. A physics-paper in this style would say: “This is a research article about General Relativity which has something to do with curvature of space and all that. This is just vague words, but we’ll leave a general defense to more competent physicists.”

In any case, it turns out that it doesn’t matter much for the rest of the paper exactly what realism means to the authors – it’s a great paper also for an instrumentalist because it’s long enough so that, rolled up, it’s good to slap flies. The focus on scientific realism seems somewhat superfluous, but I notice that the paper is to appear in “The Routledge Handbook of Scientific Realism” which might explain it.

It also didn’t become clear to me what the authors mean by underdetermination. Vaguely speaking, they seem to mean that a theory is underdetermined if it contains elements unnecessary to explain existing data (which is also what Wikipedia offers by way of definition). But the question of what’s necessary to explain data isn’t a simple yes-or-no question – it’s a question that needs a quantitative analysis.

In theory development we always have a tension between simplicity (fewer assumptions) and precision (better fit) because more parameters normally allow for better fits. Hence we use statistical measures to find out in which case a better fit justifies a more complicated model. I don’t know how one can claim that a model is “underdetermined” without such quantitative analysis.
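
As a toy example of what such a quantitative analysis looks like (my own illustration, nothing to do with the paper under discussion): fit the same fake data with models of increasing complexity and let an information criterion such as the AIC decide whether the extra parameters pay for themselves.

```python
# Compare polynomial fits of increasing degree with the Akaike information
# criterion (AIC). The extra parameters of a more complicated model only
# "win" if they improve the fit enough to offset the penalty term 2k.
import numpy as np

rng = np.random.default_rng(1)

# Fake data: a straight line plus Gaussian noise.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, size=x.size)

def aic(degree):
    """AIC (up to an additive constant) for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1          # number of data points, fit parameters
    return n * np.log(np.sum(resid**2) / n) + 2 * k

for degree in (1, 2, 5):
    print(f"polynomial of degree {degree}: AIC = {aic(degree):.1f}")
```

For data like this the straight line typically comes out with the lowest AIC: the higher-degree polynomials fit the noise slightly better, but not by enough to justify their additional parameters.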

The authors of the paper for the most part avoid the need to quantify underdetermination by using sociological markers, ie they treat models as underdetermined if cosmologists haven’t yet agreed on the model in question. I guess that’s the best they could have done, but it’s not a basis on which one can discuss what will remain underdetermined. The authors for example seem to implicitly believe that evidence for a theory at high energies can only come from processes at such high energies, but that isn’t so – one can also use high precision measurements at low energies (at least in principle). In the end it comes down, again, to quantifying which model is the best fit.

With this advance warning, let me tell you the three main philosophical issues which the authors discuss.

1. Underdetermination of topology.

Einstein’s field equations are local differential equations which describe how energy-densities curve space-time. This means these equations describe how space changes from one place to the next and from one moment to the next, but they do not fix the overall connectivity – the topology – of space-time*.
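
For reference, these are the equations in question (in units with c = 1); they relate the local curvature on the left to the local energy-momentum content on the right:

\[
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}.
\]

The point about topology is that two space-times can solve these equations with identical local geometry and still be globally different.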

A sheet of paper is a simple example. It’s flat and it has no holes. If you roll it up and make a cylinder, the paper is still flat, but now it has a hole. You could find out about this without reference to the embedding space by drawing a circle onto the cylinder and around its perimeter, so that it can’t be contracted to zero length while staying on the cylinder’s surface. This could never happen on a flat sheet. And yet, if you look at any one point of the cylinder and its surrounding, it is indistinguishable from a flat sheet. The flat sheet and the cylinder are locally identical – but they are globally different.

General Relativity thus can’t tell you the topology of space-time. But physicists don’t normally worry much about this because you can parameterize the differences between topologies, compute observables, and then compare the results to data. Topology is, in this respect, no different from any other assumption of a cosmological model. Cosmologists can, and have, looked for evidence of non-trivial space-time connectivity in the CMB data, but they haven’t found anything that would indicate our universe wraps around itself. At least so far.

In the paper, the authors point out an argument raised by someone else (Manchak) which claims that different topologies can’t be distinguished almost everywhere. I haven’t read the paper in question, but this claim is almost certainly correct. The reason is that while topology is a global property, you can change it on arbitrarily small scales. All you have to do is punch a hole into that sheet of paper, and whoops, it’s got a new topology. Or if you want something without boundaries, then identify two points with each other. Indeed you could sprinkle space-time with arbitrarily many tiny wormholes and in that way create the most abstruse topological properties (and, most likely, lots of causal paradoxes).

The topology of the universe is hence, like the topology of the human body, a matter of resolution. On distances visible to the eye you can count the holes in the human body on the fingers of your hand. On shorter distances though you’re all pores and ion channels, and on subatomic distances you’re pretty much just holes. So, asking what’s the topology of a physical surface only makes sense when one specifies at which distance scale one is probing this (possibly higher-dimensional) surface.

I thus don’t think any physicist will be surprised by the philosophers’ finding that cosmology severely underdetermines global topology. What the paper fails to discuss though is the scale-dependence of that conclusion. Hence, I would like to know: Is it still true that the topology will remain underdetermined on cosmological scales? And to what extent, and under which circumstances, can the short-distance topology have long-distance consequences, as eg suggested by the ER=EPR idea? What effect would this have on the separation of scales in effective field theory?

2. Underdetermination of models of inflation.

The currently most widely accepted model for the universe assumes the existence of a scalar field – the “inflaton” – and a potential for this field – the “inflation potential” – in which the field moves towards a minimum. While the field is getting there, space is exponentially stretched. At the end of inflation, the field’s energy is dumped into the production of particles of the standard model and dark matter.
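
In equations, the simplest single-field version is usually summarized by the slow-roll approximation (just a sketch; I am suppressing the precise conditions under which the approximation is valid):

\[
H^2 \simeq \frac{8\pi G}{3}\, V(\phi),
\qquad
3 H \dot\phi \simeq -V'(\phi),
\qquad
a(t) \propto e^{\int H\, \mathrm{d}t},
\]

so as long as the potential V(φ) is flat enough for the field φ to roll slowly, the Hubble rate H stays nearly constant and the scale factor a grows quasi-exponentially.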

This mechanism was invented to solve various finetuning problems that cosmology otherwise has, notably that the universe seems to be almost flat (the “flatness problem”), that the cosmic microwave background has the almost-same temperature in all directions except for tiny fluctuations (the “horizon problem”), and that we haven’t seen any funky things like magnetic monopoles or domain walls that tend to be plentiful at the energy scale of grand unification (the “monopole problem”).

Trouble is, there’s loads of inflation potentials that one can cook up, and most of them can’t be distinguished with current data. Moreover, one can invent more than one inflaton field, which adds to the variety of models. So, clearly, the inflation models are severely underdetermined.

I’m not really sure why this overabundance of potentials is interesting for philosophers. This isn’t so much philosophy as sociology – that the models are underdetermined is why physicists get them published, and if there was enough data to extract a potential that would be the end of their fun. Whether there will ever be enough data to tell them apart, only time will tell. Some potentials have already been ruled out with incoming data, so I am hopeful.

The questions that I wish philosophers would take on are different ones. To begin with, I’d like to know which of the problems that inflation supposedly solves are actual problems. It only makes sense to complain about finetuning if one has a probability distribution. In this, the finetuning problem in cosmology is distinctly different from the finetuning problems in the standard model, because in cosmology one can plausibly argue there is a probability distribution – it’s that of fluctuations of the quantum fields which seed the initial conditions.

So, I believe that the horizon problem is a well-defined problem, assuming quantum theory remains valid close to the Planck scale. I’m not so sure, however, about the flatness problem and the monopole problem. I don’t see what’s wrong with just assuming the initial value for the curvature is tiny (finetuned), and I don’t know why I should care about monopoles given that we don’t know grand unification is more than a fantasy.

Then, of course, the current data indicates that the inflation potential too must be finetuned which, as Steinhardt has aptly complained, means that inflation doesn’t really solve the problem it was meant to solve. But to make that statement one would have to compare the severity of finetuning, and how does one do that? Can one even make sense of this question? Where are the philosophers if one needs them?

Finally, I have a more general conceptual problem that falls into the category of underdetermination, which is to what extent the achievements of inflation are actually independent of each other. Assume, for example, you have a theory that solves the horizon problem. Under which circumstances does it also solve the flatness problem and give the right tilt for the spectral index? I suspect that the assumptions for this do not require the full mechanism of inflation with potential and all, and almost certainly not a very specific type of potential. Hence I would like to know what’s the minimal theory that explains the observations, and which assumptions are really necessary.

3. Underdetermination in the multiverse.

Many models for inflation create not only one universe, but infinitely many of them, a whole “multiverse”. In the other universes, fundamental constants – or maybe even the laws of nature themselves – can be different. How do you make predictions in a multiverse? You can’t, really. But you can make statements about probabilities, about how likely it is that we find ourselves in this universe with these particles and not any other.

To make statements about the probability of the occurrence of certain universes in the multiverse one needs a probability distribution or a measure (in the space of all multiverses or their parameters respectively). Such a measure should also take into account anthropic considerations, since there are some universes which are almost certainly inhospitable for life, for example because they don’t allow the formation of large structures.

In their paper, the authors point out that the combination of a universe ensemble and a measure is underdetermined by observations we can make in our universe. It’s underdetermined in the same way that, if I give you a bag of marbles and tell you the most likely pick is red, you can’t tell what’s in the bag.

I think physicists are well aware of this ambiguity, but unfortunately the philosophers don’t address why physicists ignore it. Physicists ignore it because they believe that one day they can deduce the theory that gives rise to the multiverse and the measure on it. To make their point, the philosophers would have had to demonstrate that this deduction is impossible. I think it is, but I’d rather leave the case to philosophers.

For the agnostic instrumentalist like me a different question is more interesting, which is whether one stands to gain anything from taking a “shut-up-and-calculate” attitude to the multiverse, even if one distinctly dislikes it. Quantum mechanics too uses unobservable entities, and that formalism – however much you detest it – works very well. It really adds something new, regardless of whether or not you believe the wave-function is “real” in some sense. As far as the multiverse is concerned, I am not sure about this. So why bother with it?

Consider the best-case multiverse outcome: Physicists will eventually find a measure on some multiverse according to which the parameters we have measured are the most likely ones. Hurray. Now forget about the interpretation and think of this calculation as a black box: You put math in on one side and out comes a set of “best” parameters on the other side. You could always reformulate such a calculation as an optimization problem which allows one to calculate the correct parameters. So, independent of the thorny question of what’s real, what do I gain from thinking about measures on the multiverse rather than just looking for an optimization procedure straight away?

Yes, there are cases – like bubble collisions in eternal inflation – that would serve as independent confirmation for the existence of another universe. But no evidence for that has been found. So for me the question remains: under which circumstances is doing calculations in the multiverse an advantage rather than unnecessary mathematical baggage?

I think this paper makes a good example for the difference between philosophers’ and physicists’ interests which I wrote about in my previous post. It was a good (if somewhat long) read and it gave me something to think about, though I will need some time to recover from all the -isms.

* Note added: The word connectivity in this sentence is a loose stand-in for those who do not know the technical term “topology.” It does not refer to the technical term “connectivity.”

Friday, August 12, 2016

The Unbearable Lightness of Philosophy

Philosophy isn’t useful for practicing physicists. On that, I am with Steven Weinberg and Lawrence Krauss who have expressed similar opinions. But I think it’s an unfortunate situation because physicists – especially those who work on the foundations of physics – could need help from philosophers.

Massimo Pigliucci, a professor of philosophy at CUNY-City College, has ingeniously addressed physicists’ complaints about the uselessness of philosophy by declaring that “the business of philosophy is not to advance science.” Philosophy, hence, isn’t just useless, but it’s useless on purpose. I applaud. At least that means it has a purpose.

But I shouldn’t let Massimo Pigliucci speak for his whole discipline.

I’ve been told that, as far as physics is concerned, there are presently three good philosophers roaming Earth: David Albert, Jeremy Butterfield, and Tim Maudlin. It won’t surprise you to hear that I have some issues to pick with each of these gentlemen, but mostly they seem reasonable indeed. I would even like to nominate a fourth Good Philosopher, Steven Weinstein from UoW, with whom even I haven’t yet managed to disagree.

The good Maudlin, for example, had an excellent essay last year on PBS NOVA, in which he argued that “Physics needs Philosophy.” I really liked his argument until he wrote that “Philosophers obsess over subtle ambiguities of language,” which pretty much sums up all that physicists hate about philosophy.

If you want to know “what follows from what,” as Maudlin writes, you have to convert language into mathematics and thereby remove the ambiguities. Unfortunately, philosophers never seem to take that step, hence physicists’ complaints that it’s just words. Or, as Arthur Koestler put it, “the systematic abuse of a terminology specially invented for that purpose.”

Maybe, I admit, it shouldn’t be the philosophers’ job to spell out how to remove the ambiguities in language. Maybe that should already be the job of physicists. But regardless of whom you want to assign the task of reaching across the line, presently little crosses it. Few practicing physicists today care what philosophers do or think.

And as someone who has tried to write about topics on the intersection of both fields, I can report that this disciplinary segregation is meanwhile institutionalized: The physics journals won’t publish on the topic because it’s too much philosophy, and the philosophy journals won’t publish because it’s too much physics.

In a recent piece on Aeon, Pigliucci elaborates on the demarcation problem, how to tell science from pseudoscience. He seems to think this problem is what underlies some physicists’ worries about string theory and the multiverse, worries that were topic of a workshop that both he and I attended last year.

But he got it wrong. While I know lots of physicists critical of string theory for one reason or the other, none of them would go so far as to declare it pseudoscience. No, the demarcation problem that physicists worry about isn’t that between science and pseudoscience. It’s that between science and philosophy. It is not without irony that Pigliucci in his essay conflates the two fields. Or maybe the purpose of his essay was an attempt to revive the “string wars,” in which case, wake me when it’s over.

To me, the part of philosophy that is relevant to physics is what I’d like to call “pre-science” – sharpening questions sufficiently so that they can eventually be addressed by scientific means. Maudlin in his above mentioned essay expressed a very similar point of view.

Philosophers in that area are necessarily ahead of scientists. But they also never get the credit for actually answering a question, because for that they’ll first have to hand it over to scientists. Like a psychologist, thus, the philosopher of physics succeeds by eventually making themselves superfluous. It seems a thankless job. There’s a reason I preferred studying physics instead.

Many of the “bad philosophers” are those who aren’t quick enough to notice that a question they are thinking about has been taken over by scientists. That this failure to notice can evidently persist, in some cases, for decades is another institutionalized problem that originates in the lack of communication between both fields.

Hence, I wish there were more philosophers willing to make it their business to advance science and to communicate across the boundaries. Maybe physicists would complain less that philosophy is useless if it wasn’t useless.