Monday, May 30, 2016

Book Review: “Why String Theory?” by Joseph Conlon

Why String Theory?
By Joseph Conlon
CRC Press (November 24, 2015)

I was sure I’d hate the book. Let me explain.

I often hear people speak about the “marketplace of ideas” as if science were a trade show where researchers sell their work. But science isn’t about manufacturing and selling products; it’s about understanding nature. And the sine qua non for evaluating the promise of an idea is objectivity.

In my mind, therefore, the absolute last thing that scientists should engage in is marketing. Marketing, advertising, and product promotion are commercial tactics whose very purpose is to affect their targets’ objectivity. Such tactics shouldn’t have any place in science.

Consequently, I have mixed feelings about scientists who attempt to convince the public that their research area is promising, with the implicit or explicit goal of securing funding and attracting students. It’s not that I have a problem with scientists who write for the public in general – I have a problem with scientists who pass off their personal opinion as fact, often supporting their conviction by quoting the number of people who share their beliefs.

In the last two decades this procedure has created an absolutely astonishing number of so-called “science” books about string theory, supersymmetry, the multiverse and other fantasies (note carefully chosen placement of commata), with no other purpose than asking the reader to please continue funding fruitless avenues of research by appealing to lofty ideals like elegance and beauty.

And indeed, Conlon starts by dedicating the book to “the taxpayers of the UK without whom this book could never have been written” and then states explicitly that his goal is to win the favor of taxpayers:
“I want to explain, to my wonderful fellow citizens who support scientific research through their taxes, why string theory is so popular, and why, despite the lack of direct empirical support, it has attained the level of prominence it has.”

That’s on page six. The prospect of reading 250 pages filled with a string theorist’s attempt to lick the butts of his “wonderful fellow citizens” made me feel somewhat nauseous. I put the book aside and instead read Sean Carroll’s new book. After that I felt slightly better and made a second attempt at Why String Theory?

Once I got past the first chapter, however, the book got markedly better. Conlon keeps the introduction to basic physics (relativity and quantum theory) to an absolute minimum. After this he lays out the history of string theory, with its many twists and turns, and explains how much string theorists’ understanding of the approach has changed over the decades.

He then gets to the reasons why people work on string theory. The first reason he lists is a chapter titled “Direct Experimental Evidence for String Theory” which consists of the single sentence “There is no direct experimental evidence for string theory.” At first I thought that he might have wanted to point out that string theorists work on it despite the lack of evidence, and that the previous paragraph had just accidentally made it look as if he, rather cynically, wanted to say that the absence of evidence is the main reason they work on it.

But actually he returns to this point later in the book (in section 10.5), where he addresses “objections made concerning connection to experiment” and points out very clearly that, even though these objections are prevalent, he thinks they deserve little or no sympathy. This makes me think that maybe he indeed wanted to say he suspects the main reason so many people work on string theory is that there’s no evidence for it. To the objection that it is “too early” to seek experimental support for string theory because the theory is not fully understood, he responds:
“The problem with this objection is that it is a time-invariant statement. It was made thirty years ago, it was made twenty years ago, it was made a decade ago, and it is made now. It is also, by observation, an objection made by those who are uninterested in observation. Muscles that are never used waste away. It is like never commencing a journey because one is always waiting for better modes of transportation, and in the end produces a community of scientists where the language of measurement and experiment is one that may be read but cannot be spoken.”

Conlon writes that he himself isn’t particularly interested in quantum gravity. His own research is the search for evidence of moduli fields in cosmology, and he has a chapter about this. He lists the usual arguments in favor of string theory: that it connects well to both general relativity and the standard model, that it’s been helpful in deriving some math theorems, and that there is now the AdS/CFT duality, with the help of which one might maybe one day be able to describe some aspect of the real world.

He somehow forgets to mention that the AdS/CFT predictions for heavy ion collisions at the LHC turned out to be dramatically wrong, and by now very few people think that the duality is of much use in this area. I actually suspect he just plainly didn’t know this. It’s not something that string theorists like to talk about. This omission is my major point of criticism. The rest of the book seems a quite balanced account, and he refrains from making cheap arguments of the type that the theory must be right because a thousand people with brains can’t be mistaken. Conlon even has a subsection addressing the Witten cult, which is rather scathing, and a hit on Arkani-Hamed gathering 5000 citations and a $3 million prize for proposing large extra dimensions (an idea that was quietly buried after the LHC ruled it out).

At the end of the book Conlon has a chapter addressing explicit criticisms – he manages to remain remarkably neutral and polite – and a “fun” chapter in which he lists different styles of doing research. Maybe there’s something wrong with my sense of humor but I didn’t find it much fun. It’s more like he is converting Kuhn’s phases of “normal science” and “revolution” into personal profiles, trying to reassure students that they don’t need to quantize gravity to get tenure.

Leaving aside Conlon’s fondness for mixing sometimes rather odd metaphors (“quantum mechanics is a jealous theory... it has spread through the population of scientific theories like a successful mutation” – “The anthropic landscape... represents incontinence of speculation joined to constipation of experiment.” – “quantum field theorists became drunk on the new wine of string theory”) and an overuse of unnecessary loanwords (in pectore, pons asinorum, affaire de coeur, lebensraum, mirabile dictu, to mention just a few), the book is reasonably well written. The reference list isn’t too extensive. This is to say, in the couple of cases in which I wanted to look up a reference it wasn’t listed, and in the one case I wanted to check a quotation it didn’t have an original source.

Altogether, Why String Theory? gives the reader a mostly fair and balanced account of string theory, and a pretty good impression of just how much the field has changed since Brian Greene’s Elegant Universe. I looked up something in Greene’s book the other day and found him complaining that the standard model is “too flexible.” Oh yes, things have changed a lot since then. I doubt it’s a complaint any string theorist would dare raise today.

In the end, I didn’t hate Conlon’s book. Maybe I’m getting older, or maybe I’m getting wiser, or maybe I’m just not capable of hating books.

[Disclaimer: Free review copy.]


Win a copy of Why String Theory? by Joseph Conlon!

I had bought the book before I was sent the review copy, and so I have a second copy of the book, entirely new and untouched. You can win the book if you are the first to answer this question correctly: Who was second author on the first paper to point out that some types of neutrino detectors might also be used to directly detect certain candidate particles for dark matter? Submit answer in the comments, do not send an email. The time-stamp of the comment counts. (Please only submit an answer if you are willing to send me a postal address to which the book can be shipped.)

Update: The book is gone!

Away Note

I have an upcoming trip to Helsinki. After this I'll be tied up in family business, and then my husband goes on a business trip and I have the kids alone. Then the Kindergarten will be closed for a day (forgot why, I'm sure they must have some reason), I have to deal with an ant infestation in our apartment, and more family business follows. In summary: busy times.

I have a book review to appear on this blog later today, but after this you won't hear much from me for a week or two. Keep in mind that since I have comment moderation on, it might take a while for your comment to appear when I am traveling. With thanks for your understanding, here's a random cute pic of Gloria :)


Thursday, May 26, 2016

How can we test quantum gravity?

If you have good eyes, the smallest objects you can make out are about a tenth of a millimeter, roughly the width of a human hair. Add technology, and the smallest structures we have measured so far are approximately 10⁻¹⁹ m, the wavelength of the protons collided at the LHC. It has taken us about 400 years from the invention of the microscope to the construction of the LHC – 400 years to cross 15 orders of magnitude.
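As a quick sanity check of that number (assuming a beam energy of about 6.5 TeV per proton, a figure I am supplying here, not one quoted above), the relevant length is the de Broglie wavelength:

\[
\lambda \simeq \frac{hc}{E} \approx \frac{1.24\times 10^{-6}\ \mathrm{eV\,m}}{6.5\times 10^{12}\ \mathrm{eV}} \approx 2\times 10^{-19}\ \mathrm{m}.
\]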

Quantum effects of gravity are estimated to become relevant on distance scales of approximately 10⁻³⁵ m, known as the Planck length. That’s another 16 orders of magnitude to go. It makes you wonder whether it’s possible at all, or whether all the effort to find a quantum theory of gravity is just idle speculation.
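For reference, the Planck length is the only length one can combine from Planck’s constant, Newton’s constant, and the speed of light:

\[
\ell_{\mathrm{P}} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}.
\]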

I am optimistic. The history of science is full of people who thought things impossible that have meanwhile been done: measuring the deflection of light by the sun, heavier-than-air flying machines, detecting gravitational waves. Hence, I don’t think it’s impossible to experimentally test quantum gravity. Maybe it will take some decades, or maybe it will take some centuries – but if only we keep pushing, one day we will measure quantum gravitational effects. Not by directly crossing these 16 orders of magnitude, I believe, but instead by indirect detections at lower energies.

From nothing comes nothing though. If we don’t think about what quantum gravitational effects might look like and where they might show up, we’ll certainly never find them. But fueling my optimism is the steadily increasing interest in the phenomenology of quantum gravity, the research area dedicated to studying how to best find evidence for quantum gravitational effects.

Since there isn’t any one agreed-upon theory for quantum gravity, existing efforts to find observable phenomena focus on finding ways to test general features of the theory, properties that have been found in several different approaches to quantum gravity. Quantum fluctuations of space-time, for example, or the presence of a “minimal length” that would impose a fundamental resolution limit. Such effects can be quantified in mathematical models, which can then be used to estimate the strength of the effects and thus to find out which experiments are most promising.

Testing quantum gravity has long been thought to be out of reach of experiments, based on estimates which show that it would take a collider the size of the Milky Way to accelerate protons enough to produce a measurable number of gravitons (the quanta of the gravitational field), or a detector the size of planet Jupiter to measure a graviton produced elsewhere. Not impossible, but clearly not something that will happen in my lifetime.

One testable consequence of quantum gravity might be, for example, a violation of the symmetry of special and general relativity known as Lorentz-invariance. Interestingly, it turns out that violations of Lorentz-invariance are not necessarily small even if they are created at distances too short to be measurable. Instead, these symmetry violations seep into many particle reactions at accessible energies, and these have been tested to extremely high accuracy. No evidence for violations of Lorentz-invariance has been found. This might not sound like much, but knowing that this symmetry has to be respected by quantum gravity is an extremely useful guide in the development of the theory.

Other testable consequences might be found in the weak-field limit of quantum gravity. In the early universe, quantum fluctuations of space-time would have led to temperature fluctuations of matter. And these temperature fluctuations are still observable today in the Cosmic Microwave Background (CMB). The imprint of such “primordial gravitational waves” on the CMB has not yet been measured (LIGO is not sensitive to them), but it is not so far off measurement precision.

A lot of experiments are currently searching for this signal, including BICEP and Planck. This raises the question whether it is possible to infer from the primordial gravitational waves that gravity must have been quantized in the early universe. Answering this question is one of the presently most active areas in quantum gravity phenomenology.

Also testing the weak-field limit of quantum gravity are attempts to bring objects into quantum superpositions that are much heavier than elementary particles. This makes the gravitational field stronger and potentially offers the chance to probe its quantum behavior. The heaviest objects that have so far been brought into superpositions weigh about a nano-gram, which is still several orders of magnitude too light for their gravitational field to be measurable. But a group in Vienna recently proposed an experimental scheme that would make it possible to measure the gravitational field more precisely than ever before. We are slowly closing in on the quantum gravitational range.

Estimates like the galaxy-sized collider, however, merely concern the direct detection of gravitons, and that isn’t the only manifestation of quantum gravitational effects. There are various other observable consequences that quantum gravity could give rise to, some of which have already been looked for, and others that we plan to look for. So far we have only negative results. But even negative results are valuable, because they tell us what properties the sought-for theory cannot have.

[Image: from arXiv:1602.07539]

Evidence from the weak-field limit would prove that gravity really is quantized and finally deliver the much-needed experimental evidence, confirming that we’re not just doing philosophy. However, for most of us in the field the strong-gravity limit is more interesting. By the strong-gravity limit I mean Planckian curvature, which (not counting those galaxy-sized colliders) can only be found close to the centers of black holes and towards the big bang.

(Note that in astrophysics, “strong gravity” is sometimes used to mean something different, referring to large deviations from Newtonian gravity which can be found, e.g., around the horizons of black holes. In comparison to the Planckian curvature required for strong quantum gravitational effects, this is still exceedingly weak.)

Strong quantum gravitational effects could also have left an imprint in the cosmic microwave background, notably in the type of correlations that can be found in the fluctuations. There are various models of string cosmology and loop quantum cosmology that have explored the observational consequences, and proposed experiments like EUCLID and PRISM might find first hints. Also the upcoming experiments to test the 21-cm hydrogen absorption could harbor information about quantum gravity.

A somewhat more speculative idea is based on a recent finding according to which the gravitational collapse of matter might not always form a black hole, but could escape the formation of a horizon. If that is so, then the remaining object would give us an open view on a region with quantum gravitational effects. It isn’t yet clear exactly what signals we would have to look for to find such an object, but this is a promising research direction because it could give us direct access to strong space-time curvature.

There are many other ideas out there. A large class of models, for example, deals with the possibility that quantum gravitational effects endow space-time with the properties of a medium. This can lead to the dispersion of light (colors running apart), birefringence (polarizations running apart), decoherence (preventing interference), or an opacity of otherwise empty space. More speculative ideas include Craig Hogan’s quest for holographic noise, Bekenstein’s table-top experiment that searches for Planck-length discreteness, or searches for evidence of a minimal length in tritium decay. Some general properties that have recently been found, and for which we have yet to find good experimental tests, are geometric phase transitions in the early universe and dimensional reduction.

Without doubt, there is much that remains to be done. But we’re on the way.

[This post previously appeared on Starts With A Bang.]

Thursday, May 19, 2016

The Holy Grail of Crackpot Filtering: How the arXiv decides what’s science – and what’s not.

Where do we draw the boundary between science and pseudoscience? It’s a question philosophers have debated for as long as there’s been science – and last time I looked they hadn’t made much progress. When you ask a sociologist, the answer is normally a variant of: Science is what scientists do. So what do scientists do?

You might have heard that scientists use what’s called the scientific method, a virtuous cycle of generating and testing hypotheses which supposedly separates the good ideas from the bad ones. But that’s only part of the story because it doesn’t tell you where the hypotheses come from to begin with.

Science doesn’t operate with randomly generated hypotheses for the same reason natural selection doesn’t work with randomly generated genetic codes: it would be highly inefficient, and any attempt to optimize the outcome would be doomed to fail. What we do instead is heavily filter hypotheses, and then consider only those which are small mutations of ideas that have previously worked. Scientists like to be surprised, but not too much.

Indeed, if you look at the scientific enterprise today, almost all of its institutionalized procedures are methods not for testing hypotheses, but for filtering hypotheses: Degrees, peer reviews, scientific guidelines, reproduction studies, measures for statistical significance, and community quality standards. Even the use of personal recommendations works to that end. In theoretical physics in particular the prevailing quality standard is that theories need to be formulated in mathematical terms. All these are requirements which have evolved over the last two centuries – and they have proved to work very well. It’s only smart to use them.

But the business of hypotheses filtering is a tricky one and it doesn’t proceed by written rules. It is a method that has developed through social demarcation, and as such it has its pitfalls. Humans are prone to social biases, and every once in a while an idea gets dismissed not because it’s bad, but because it lacks community support. And there is no telling how often this happens, because these are the stories we never get to hear.

It isn’t news that scientists lock shoulders to defend their territory and use technical terms like fraternities use secret handshakes. It thus shouldn’t come as a surprise that an electronic archive which caters to the scientific community would develop software to emulate the community’s filters. And that is, in a nutshell, basically what the arXiv is doing.

In an interesting recent paper, Luis Reyes-Galindo had a look at the arXiv moderators and their reliance on automated filters.

In the attempt to develop an algorithm that would sort papers into arXiv categories automatically, thereby helping arXiv moderators decide when a submission needs to be reclassified, it turned out that papers which scientists would mark down as “crackpottery” showed up as unclassifiable or stood out through language significantly different from that in the published literature. According to Paul Ginsparg, who developed the arXiv more than 20 years ago:
“The first thing I noticed was that every once in a while the classifier would spit something out as ‘I don't know what category this is’ and you’d look at it and it would be what we’re calling this fringe stuff. That quite surprised me. How can this classifier that was tuned to figure out category be seemingly detecting quality?

“[Outliers] also show up in the stop word distribution, even if the stop words are just catching the style and not the content! They’re writing in a style which is deviating, in a way. [...]

“What it’s saying is that people who go through a certain training and who read these articles and who write these articles learn to write in a very specific language. This language, this mode of writing and the frequency with which they use terms and in conjunctions and all of the rest is very characteristic to people who have a certain training. The people from outside that community are just not emulating that. They don’t come from the same training and so this thing shows up in ways you wouldn’t necessarily guess. They’re combining two willy-nilly subjects from different fields and so that gets spit out.”
It doesn’t surprise me much – you can see this happening in comment sections all over the place: the “insiders” can immediately tell who is an “outsider.” Often it doesn’t take more than a sentence or two, an odd expression, a term used in the wrong context, a phrase that nobody in the field would ever use. It is only logical that with smart software you can tell insiders from outsiders even more efficiently than humans can. According to Ginsparg:
“We've actually had submissions to arXiv that are not spotted by the moderators but are spotted by the automated programme [...] All I was trying to do is build a simple text classifier and inadvertently I built what I call The Holy Grail of Crackpot Filtering.”
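This is not the arXiv’s actual software, but the idea of a stop-word-based outlier check is simple enough to sketch. In the toy version below everything – the word list, the corpus, the distance measure – is my own invention for illustration only: a submission whose stop-word profile deviates too much from a reference corpus of accepted papers would be flagged for a human moderator.

from collections import Counter
import math

STOP_WORDS = ["the", "of", "and", "to", "in", "that", "is", "for", "with", "as"]

def stop_word_profile(text):
    # Relative frequency of each stop word among all words in the text.
    words = text.lower().split()
    total = max(len(words), 1)
    counts = Counter(words)
    return [counts[w] / total for w in STOP_WORDS]

def mean_profile(texts):
    # Average stop-word profile of a reference corpus, e.g. accepted papers.
    profiles = [stop_word_profile(t) for t in texts]
    return [sum(column) / len(profiles) for column in zip(*profiles)]

def deviation(text, reference):
    # Euclidean distance between a submission's profile and the reference.
    return math.sqrt(sum((p - r) ** 2
                         for p, r in zip(stop_word_profile(text), reference)))

# A deviation above some calibrated threshold would flag the submission.
reference = mean_profile([
    "we compute the cross section of the process to next-to-leading order",
    "the decay of the higgs boson is studied in the context of the standard model",
])
print(deviation("vortex aether spirals prove that einstein was wrong about everything", reference))
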
Trying to speak in the code of a group you haven’t been part of for at least some time is pretty much impossible, much like it’s impossible to fake the accent of a city you haven’t lived in for a while. Such in-group and out-group demarcation is the subject of much study in sociology – not specifically the sociology of science, but generally. Scientists are human, and of course in-group and out-group behavior also shapes their profession, even though they like to deny it as if they were superhuman think-machines.

What is interesting about this paper is that, for the first time, it openly discusses how the process of filtering happens. It’s software that literally encodes the hidden rules physicists use to sort out cranks. From what I can tell, the arXiv filters work reasonably well – otherwise there would be much complaint in the community. The vast majority of researchers in the field are quite satisfied with what the arXiv is doing, meaning the arXiv filters match their own judgement.

There are exceptions of course. I have heard some stories of people who were working on new approaches that fell between the stools and were flagged as potential crackpottery. The cases that I know of could eventually be resolved, but that might tell you more about the people I know than about the way such issues typically end.

Personally, I have never had a problem with the arXiv moderation. I had a paper reclassified from gen-ph to gr-qc once by a well-meaning moderator, which is how I learned that gen-ph is the dump for borderline crackpottery. (How would I have known? I don’t read gen-ph. I was just assuming someone reads it.)

I don’t so much have an issue with what gets filtered on the arXiv; what bothers me much more is what does not get filtered and hence, implicitly, gets approval by the community. I am very sympathetic to the concerns of John The-End-Of-Science Horgan that scientists don’t do enough to clean up their own doorsteps. There is no “invisible hand” that corrects scientists if they go astray. We have to do this ourselves. In-group behavior can greatly misdirect science because, given sufficiently many people, even fruitless research can become self-supportive. No filter that is derived from the community’s own judgement will do anything about this.

It’s about time that scientists start paying attention to social behavior in their community. It can, and sometimes does, affect objective judgement. Ignoring or flagging what doesn’t fit into pre-existing categories is one such social problem that can stand in the way of progress.

In a 2013 paper published in Science, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the cross-correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.”
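The study itself scores pairs of co-cited journals against randomized citation networks; the toy sketch below is simplified far beyond that, and every detail in it – the corpus, the journal names, the use of the median pair count as a score – is made up purely for illustration. It only conveys the basic idea: rate a paper by how commonly the sources in its reference list appear together elsewhere.

from collections import Counter
from itertools import combinations

def pair_counts(reference_lists):
    # Count how often each pair of cited sources co-occurs across the corpus.
    counts = Counter()
    for refs in reference_lists:
        counts.update(combinations(sorted(set(refs)), 2))
    return counts

def conventionality(refs, counts):
    # Median co-occurrence count of the source pairs in one paper's reference
    # list; a low value signals unusual ("unconventional") combinations.
    scores = sorted(counts[p] for p in combinations(sorted(set(refs)), 2))
    return scores[len(scores) // 2] if scores else 0

corpus = [["PRL", "PRD", "JHEP"], ["PRD", "JHEP", "NPB"], ["PRL", "PRD", "NPB"]]
counts = pair_counts(corpus)
print(conventionality(["PRL", "PRD", "Journal-of-Unusual-Ideas"], counts))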

Conventional science isn’t bad science. But we also need unconventional science, and we should be careful to not assign the label “crackpottery” too quickly. If science is what scientists do, scientists should pay some attention to the science of what they do.

Sunday, May 15, 2016

Dear Dr B: If photons have a mass, would this mean special relativity is no longer valid?

Einstein and Lorentz.
[Image: Wikipedia]
“[If photons have a restmass] would that mean the whole business of the special theory of relativity being derived from the idea that light has to go at a particular velocity in order for it to exist/Maxwell’s identification of e/m waves as light because they would have to go at the appropriate velocity is no longer valid?”

(This question came up in the discussion of a recent proposal according to which photons with a tiny restmass might cause an effect similar to the cosmological constant.)

Dear Brian,

The short answer to your question is “No.” If photons had a restmass, special relativity would still be as valid as it’s always been.

The longer answer is that the invariance of the speed of light features prominently in popular explanations of special relativity for historic reasons, not for technical reasons. Einstein was led to special relativity by contemplating what it would be like to travel with light, and then tried to find a way to accommodate an observer’s motion with the invariance of the speed of light. But the derivation of special relativity is much more general than that, and it is unnecessary to postulate that the speed of light is invariant.

Special relativity is really just physics in Minkowski space, that is, the 4-dimensional space-time you obtain after promoting time from a parameter to a coordinate. Einstein wanted the laws of physics to be the same for all inertial observers in Minkowski space, ie observers moving at constant velocity. If you translate this requirement into mathematics, you are led to ask for the symmetry transformations of Minkowski space. These transformations form a group – the Poincaré group – from which you can read off all the odd things you have heard of: time dilation, length contraction, relativistic mass, and so on.

The Poincaré group contains two important subgroups. One contains just the translations in space and time. This tells you that if you have an infinitely extended and unchanging space, then it doesn’t matter where or when you do your experiment – the outcome will be the same. The remaining part of the Poincaré group is the Lorentz group. The Lorentz group contains rotations – these tell you it doesn’t matter in which direction you turn, the laws of nature will still be the same. Besides the rotations, the Lorentz group contains boosts, which are basically rotations between space and time. Invariance under boosts tells you that it doesn’t matter at which velocity you move, the laws of nature will remain the same. It’s the boosts where all the special relativistic fun goes on.

Deriving the Lorentz group, if you know how to do it, is a three-liner, and I assure you it has absolutely nothing to do with rocket ships and lasers and so on. It is merely based on the requirement that the metric of Minkowski space has to remain invariant. Carry through with the math and you’ll find that the boosts depend on a free constant with the dimension of a speed. You can further show that this constant is the speed of massless particles.
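For the curious, here is the three-liner in sketch form. Demand that a boost with velocity v along the x-direction leaves the Minkowski line element unchanged,

\[
c^2 t'^2 - x'^2 = c^2 t^2 - x^2 ,
\]

and (up to signs and conventions) you are forced to

\[
t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \qquad x' = \gamma\,(x - v\,t), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]

where c is, at this stage, nothing but a free constant with the dimension of a speed; identifying it with the speed of light is a separate, empirical step.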

Hence, if photons are massless, then the constant in the Lorentz transformation is the speed of light. If photons are not massless, then the constant in the Lorentz transformation is still there, but it is not identical to the speed of light. We already know, however, that these two speeds must be identical to very good precision, which is the same as saying the mass of photons must be very small.
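To see this explicitly, take the relativistic dispersion relation of a particle with mass m; its speed stays below the invariant speed c and approaches it only as the mass goes to zero (or the energy to infinity):

\[
E^{2} = p^{2}c^{2} + m^{2}c^{4} \quad\Longrightarrow\quad v = \frac{\partial E}{\partial p} = c\sqrt{1 - \frac{m^{2}c^{4}}{E^{2}}} < c \quad \text{for } m > 0 ,
\]

so a massive photon would travel slightly slower than the invariant speed, with the deviation suppressed by the tiny ratio m²c⁴/E².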

Giving a mass to photons is unappealing not because it violates special relativity – it doesn’t – but because it violates gauge-invariance, the most cherished principle underlying the standard model. But that’s a different story and shall be told another time.

Thanks for an interesting question!

Monday, May 09, 2016

Book review: “The Big Picture” by Sean Carroll

The Big Picture: On the Origins of Life, Meaning, and the Universe Itself
Sean Carroll
Dutton (May 10, 2016)

Among the scientific disciplines, physics is unique: Because it is concerned with the most fundamental entities, its laws must be respected in all other areas of science. While there are many emergent laws which are interesting in their own right – from neurobiology to sociology – there is no doubt they all have to be compatible with energy conservation. And the second law of thermodynamics. And quantum mechanics. And the standard model had better be consistent with whatever you think are the neurological processes that make you “you.” There’s no avoiding physics.

In his new book, The Big Picture, Sean explains just why you can’t ignore physics when you talk about extrasensory perception, consciousness, god, afterlife, free will, or morals. In the first part, Sean lays out what, to our best current knowledge, the fundamental laws of nature are, and what their relevance is for all other emergent laws. In the later parts he then goes through the consequences that follow from this.

On the way from quantum field theory to morals, he covers what science has to say about complexity, the arrow of time, and the origin of life. (If you attended the 2011 FQXi conference, parts will sound very familiar.) Then, towards the end of the book, he derives advice from his physics-based philosophy – which he calls “poetic naturalism” – for finding “meaning” in life and finding a “good” way to organize our living together (scare quotes because these words might not mean what you think they mean). His arguments rely heavily on Bayesian reasoning, so you better be prepared to update your belief system while reading.

The Big Picture is, above everything, a courageous book – and an overdue one. I have had many arguments about exactly the issues that Sean addresses in his book – from “qualia” to “downwards causation” – but I neither have the patience nor the interest to talk people out of their cherished delusions. I’m an atheist primarily because I think religion would be wasting my time, time that I’d rather spend on something more insightful. Trying to convince people that their beliefs are inconsistent would also be wasting my time, hence I don’t. But if I did, I almost certainly wouldn’t be able to remain as infallibly polite as Sean.

So, I am super happy about this book. Because now, whenever someone brings up Mary The Confused Color Scientist who can’t tell sensory perception from knowledge about that perception, I’ll just – politely – tell them to read Sean’s book. The best thing I learned from The Big Picture is that apparently Frank Jackson, the philosopher who came up with The Color Scientist, eventually conceded that the argument was wrong. The world of philosophy indeed sometimes moves! Time, then, to stop talking about qualia.

I really wish I had found something to disagree with in Sean’s book, but the only quibble I have (you won’t be surprised to hear) is that I think what Sean-The-Compatibilist calls “free will” doesn’t deserve being called “free will.” Using the adjective “free” strongly suggests an independence from the underlying microscopic laws, and hence a case of “strong emergence” – which is an idea that should go into the same bin as qualia. I also agree with Sean however that fighting about the use of words is moot.

(The other thing I’m happy about is that, leaving aside the standard model and general relativity, Sean’s book has almost zero overlap with the book I’m writing. *wipes_sweat_off_forehead*. Could you all please stop writing books until I’m done, it makes me nervous.)

In any case, it shouldn’t come as a surprise that I agree so wholeheartedly with Sean because I think everybody who open-mindedly looks at the evidence – ie all we currently know about the laws of nature – must come to the same conclusions. The main obstacle in conveying this message is that most people without training in particle physics don’t understand effective field theory, and consequently don’t see what this implies for the emergence of higher level laws. Sean does a great job overcoming this obstacle.

I wish I could make myself believe that after the publication of Sean’s book I’ll never again have to endure someone insisting there must be something about their experience that can’t be described by a handful of elementary particles. But I’m not very good at making myself believe in exceedingly unlikely scenarios, whether that’s the existence of an omniscient god or the ability of humans to agree on how unlikely this existence is. At the very least however, The Big Picture should make clear that physicists aren’t just arrogant when they say their work reveals insights that reach far beyond the boundaries of their discipline. Physics indeed has an exceptional status among the sciences.

[Disclaimer: Free review copy.]

Tuesday, May 03, 2016

Experimental Search for Quantum Gravity 2016

I am happy to announce that this year we will run the 5th international conference on Experimental Search for Quantum Gravity here in Frankfurt, Germany. The meeting will take place Sep 19-23, 2016.

We have a (quite preliminary) website up here. Application is now open and will run through June 1st. If you're a student or young postdoc with an interest in the phenomenology of quantum gravity, this conference might be a good starting point and I encourage you to apply. We cannot afford to hand out travel grants, but we will waive the conference fee for young participants (young in terms of PhD age, not biological age).

The location of the meeting will be at my new workplace, the Frankfurt Institute for Advanced Studies, FIAS for short. When it comes to technical support, they seem considerably better organized (not to mention staffed) than my previous institution. At this stage I am thus tentatively hopeful that this year we'll both record and livestream the talks. So stay tuned, there's more to come.