Monday, April 30, 2012

Interna



Spring came late to Germany, but it seems it has finally arrived. The 2012 Riesling has its first leaves and the wheat is a foot high.

Lara and Gloria are now 16 months old, almost old enough that we should start counting their age in fractions of a year. This month's news is Lara's first molar, and Gloria's first word:



I have been busy writing a proposal for the Swedish Research Council, which luckily is now submitted, and I also had a paper accepted for publication. Ironically, of all the papers I have written in recent years, it's the one that is the least original and cost me the least amount of time, yet it's the only one that went smoothly through peer review.



Besides this, I'm spending my time organizing a workshop, a conference, and a four-week-long program. I'm also battling a recurring ant infestation in our apartment, which is complicated by my hesitation to distribute toxins where the children play.

Friday, April 27, 2012

The Nerdly Painter's Blog

In expecto weekendum, I want to share with you a link to Regina Valluzzi's blog Nerdly Painter. Regina has a BS in Materials Science from MIT and a PhD in Polymer Science from the University of Massachusetts Amherst, and she does the most wonderful science-themed paintings I've seen. A teaser below. Go check out her blog and have a good start into the weekend!

Wednesday, April 25, 2012

The Cosmic Ray Composition Problem

A recent arXiv paper by Shaham and Piran (arXiv:1204.1488) provides an update on the cosmic ray composition problem.

First the basics: We're talking about the ultra-high energy end of the cosmic ray spectrum, with total energies of about 10^6 TeV. That's the energy of the incident particles in the Earth rest frame, not the center-of-mass energy of their collision with air molecules (ie mostly nucleons), which is "only" of the order 10 TeV, and thus somewhat larger than what the LHC delivers.
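For orientation, the usual kinematic estimate for a primary with lab-frame energy E hitting a nucleon of mass m_N at rest is

    \sqrt{s} \;\approx\; \sqrt{2\, E\, m_N c^2} \;\approx\; \sqrt{2 \times 10^{18}\,\mathrm{eV} \times 0.94\,\mathrm{GeV}} \;\approx\; 40\,\mathrm{TeV},

a factor of a few above the LHC's center-of-mass energy.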

After the primary collision, the incoming particles produce a cascade of secondary particles, known as a "cosmic ray shower," which can be detected on the ground. These showers are then reconstructed from the data with suitable software so that, ideally, the physics of the initial high energy collision can be extracted. For some more details on cosmic ray showers, please read this earlier post.

Cosmic ray shower, artist's impression. Source: ASPERA

The Pierre Auger Cosmic Ray Observatory is a currently running experiment that measures cosmic ray showers on the ground. One relevant quantity about the cosmic rays is the "penetration depth," that is, the distance the primary particle travels through the atmosphere until it makes its first collision. The penetration depth can be reconstructed if the shower on the ground is measured sufficiently precisely; this kind of data is relatively new.

The penetration depth depends on the probability that the primary particle interacts, and with that on the nature of the particle. While we have never actually tested collisions at the center-of-mass energies of the highest-energy cosmic rays, we think we have a pretty good understanding of what's going on by virtue of the standard model of particle physics. All the knowledge we have, based on measurements at lower energies, is incorporated into the numerical models. Since the collisions involve nucleons rather than elementary particles, this goes together with an extrapolation of the parton distribution functions by the DGLAP equations. This sounds complicated, but since QCD is asymptotically free, it should actually get easier to understand at high energies.

Shaham and Piran in their paper argue that this extrapolation isn't working as expected, which might be a signal for new physics.

The reason is that the penetration depth data shows that at high energies the distribution of first interactions peaks at a shorter depth and is also more narrowly peaked than one expects for protons. Now it might be that at higher energies the cosmic rays are dominated by other, heavier primary particles that are more likely to interact, thus moving the peak of the distribution to a shorter depth. However, adding a contribution from other constituents (heavier nuclei: He, Fe...) also smears out the distribution over the depth, and thus doesn't fit the width of the observed penetration depth distribution.
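To see why this is a real tension, here is a little toy simulation: it superposes two made-up penetration depth distributions, a broad "proton-like" one and a shallower, narrower "iron-like" one. The numbers are invented purely for illustration and have nothing to do with the actual shower codes; the point is only that admixing a second component shifts the mean to shorter depths but broadens the total distribution instead of narrowing it.

    # Toy illustration, not the actual shower simulations: mixing in a heavier
    # component shifts the mean penetration depth but broadens the combined
    # distribution. All numbers are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    protons = rng.normal(loc=800.0, scale=60.0, size=n)  # deep and broad (toy values, g/cm^2)
    iron    = rng.normal(loc=700.0, scale=25.0, size=n)  # shallow and narrow (toy values)

    for frac_fe in (0.0, 0.5, 1.0):
        n_fe = int(frac_fe * n)
        mix = np.concatenate([iron[:n_fe], protons[:n - n_fe]])
        print(f"Fe fraction {frac_fe:.1f}: mean depth = {mix.mean():6.1f}, width = {mix.std():5.1f}")

The 50/50 mixture comes out shallower on average than pure protons, but its width is even larger than in the pure proton case, while the data calls for a narrow distribution.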

This can be seen very well from the figure below (Fig 2 from Shaham and Piran's paper) which shows the data from the Pierre Auger Collaboration, and the expectation for a composition of protons and Fe nuclei. You can see that adding a second component does have the desired effect of moving the average value to a shorter depth. But it also increases the width. (And, if the individual peaks can be resolved, produces a double-peak structure.)
Fig 2 from arXiv:1204.1488. Shown is the number of events in the energy bin 1 to 1.25 × 10^6 TeV as a function of the penetration depth. The red dots are the data from the Pierre Auger Collaboration (arXiv:1107.4804), the solid blue line is the expectation for a combination of protons and Fe nuclei.
The authors thus argue that there is no composition of the ultra-high energy primary cosmic ray particles that fits the data well. Shaham and Piran think that this mismatch should be taken seriously. While different simulations yield slightly different results, the results are comparable and none of the codes fits the data. If it's not the simulation, the mismatch comes about either from the data or from the physics.
"There are three possible solutions to this puzzling situation. First, the observational data might be incorrect, or it is somehow dominated by poor statistics: these results are based on about 1500 events at the lowest energy bin and about 50 at the highest one. A mistake in the shower simulations is unlikely, as different simulations give comparable results. However, the simulations depend on the extrapolations of the proton cross sections from the measured energies to the TeV range of the UHECR collisions. It is possible that this extrapolation breaks down. In particular a larger cross section than the one extrapolated from low energies can explain the shorter penetration depth. This may indicates new physics that set in at energies of several dozen TeV."
The authors are very careful not to jump to conclusions, and I won't either. To be convinced there is new physics to find here, I would first like to see a quantification of how bad the best fit from the models actually is. Unfortunately, there's no chi-square/dof in the paper that would allow such a quantification, and as illustrative as the figure above is, it's only one energy bin and might be a misleading visualization. I am also not at all sure that the different simulations are actually independent of each other. Since scientific communities exchange information rapidly and efficiently, there is a risk of systematic bias even if several models are considered. Possibly there's just some cross-section missing or wrong. Finally, there's nothing in the paper about how the penetration depth data is obtained to begin with. Since that's not a primary observable, there must be some modeling involved there too, though I agree that this isn't a likely source of error.

With these words of caution ahead, it is possible that we are looking here at the first evidence for physics beyond the standard model.

Monday, April 23, 2012

Can we probe Planck-scale physics with quantum optics?

You might have read about this some weeks ago on Chad Orzel's blog or at Ars Technica: Nature published a paper by Pikovski et al on the possibility of testing Planck scale physics with quantum optics. The paper is on the arXiv as arXiv:1111.1979 [quant-ph]. I left a comment at Chad's blog explaining that it is implausible that the proposed experiment will test any Planck scale effects. Since I am generally supportive of everybody who cares about quantum gravity phenomenology, I'd have left it at this and been happy that Planck scale physics made it into Nature. But then I saw that Physics Today picked it up, and before this spreads further, here's an extended explanation of my skepticism.

Igor Pikovski et al have proposed a test for Planck scale physics using recent advances in quantum optics. The framework they use is a modification of quantum mechanics, expressed by a deformation of the canonical commutation relation, that takes into account that the Planck length plays the role of a minimal length. This is one of the most promising routes to quantum gravity phenomenology, and I was excited to read the article.
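For concreteness: the deformations in question are, schematically, of the type familiar from the minimal-length literature (the paper uses a parametrization along these lines; the exact form doesn't matter for the following argument),

    [\hat{x}, \hat{p}] \;=\; i\hbar \left(1 + \beta\, \hat{p}^2\right), \qquad \beta \sim \frac{\beta_0}{(M_{\rm Pl}\, c)^2},

which implies a minimal position uncertainty of order \hbar\sqrt{\beta}, that is of order the Planck length if the dimensionless parameter \beta_0 is of order one.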

In their article, the authors claim that their proposed experiment makes it feasible to "probe the possible effects of quantum gravity in table-top quantum optics experiment" and that it reaches a "hitherto unprecedented sensitivity in measuring Planck-scale deformations." The reason for this increased sensitivity to Planck-scale effects is, in the authors' own words, that "the deformations are enhanced in massive quantum systems."

Unfortunately, this claim is not backed up by the literature the authors refer to.

The underlying reason is that the article fails to address the question of Lorentz-invariance. The deformation used is not invariant under normal Lorentz-transformations. There are two ways to deal with that, either breaking Lorentz-invariance or deforming it. If it is broken, there exists a multitude of very strong constraints that would have to be taken into account and are not mentioned in the article. Presumably then the authors implicitly assume that Lorentz-symmetry is suitably deformed in order to keep the commutation relations invariant - and in order to test something actually new. This can in fact be done, but comes at a price. Now the momenta transform non-linearly. Consequently, a linear sum of momenta is no longer Lorentz-invariant. In the appendix however, the authors have used the normal sum of momenta to define the center-of-mass momentum. This is inconsistent. To maintain Lorentz-invariance, the modified sum must be used.

This issue cannot be ignored for the following reason. If a suitably Lorentz-invariant sum is used, it contains higher-order terms. The relevance of these terms does indeed increase with the mass. This also means that the modification of the Lorentz-transformations becomes more relevant with the mass. Since this is a consequence of just summing up momenta, and has nothing in particular to do with the nature of the object being studied, the increasing relevance of corrections prevents one from reproducing a macroscopic limit that is in agreement with our knowledge of Special Relativity. This behavior of the sum, whose use, we recall, is necessary for Lorentz-invariance, is thus highly troublesome. It is known in the literature as the "soccer ball problem." It is not mentioned in the article.
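Schematically, and with the details depending on the model, a composition law that respects the deformed symmetry looks like

    p \oplus q \;\approx\; p + q + \frac{\Gamma(p,q)}{M_{\rm Pl}\, c} + \dots

with \Gamma bilinear in the momenta. For N constituents with comparable momenta, the nonlinear corrections to the total momentum then grow roughly like N^2 p^2/(M_{\rm Pl} c), while the linear term grows only like N p, so the relative size of the correction increases with the size of the system instead of averaging out.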

If the soccer-ball problem persists, the theory is already in conflict with observation. While several suggestions have been made for how this problem can be addressed in the theory, no agreement has been reached to date. A plausible and useful ad-hoc suggestion, made by Magueijo and Smolin, is that the relevant mass scale, the Planck mass, is rescaled for N particles to N times the Planck mass. Ie, the scale where effects become large moves away as the number of particles increases.

Now, it is not clear that this ad-hoc solution is correct. What is clear, however, is that if the theory makes sense at all, the effect must become less relevant for systems with many constituents. A suppression with the number of constituents is a natural expectation.

If one takes into account that for sums of momenta the relevant scale is not the Planck mass, but N times the Planck mass, the effect the authors consider is suppressed by roughly a factor 10^10. This means the existing bounds (for single particles) cannot be significantly improved in this way. This is the expectation that one can have from our best current understanding of the theory.

This is not to say that the experiment should not be done. It is always good to test new parameter regions. And, who knows, all I just said could turn out to be wrong. But it does mean that based on our current knowledge, it is extremely unlikely that anything new is to be found there. And vice versa, if nothing new is found, this cannot be used to rule out a minimal length modification of quantum mechanics.

(This is not the first time, btw, that somebody has tried to exploit the fact that the deviations get larger with mass by using composite systems, thereby promoting a bug to a feature. In my recent review, I have a subsection dedicated to this.)

Sunday, April 22, 2012

Experimental Search for Quantum Gravity 2012

It is my great pleasure to let you know that there will be a third conference on Experimental Search for Quantum Gravity, October 22 to 25 this year, at Perimeter Institute. (A summary of the ESQG 2007 is here, and a summary from 2010 is here.) Even better is that this time it wasn't my initiative but Astrid Eichhorn's, who is also to be credited for the theme "The hard facts." The third of the organizers is Lee Smolin, who was also of great help with the last meeting. But most importantly, the website of the ESQG 2012 is here.

We have an open registration with a moderate fee of CAN$ 115, which is mostly to cover catering expenses. There is a limit to the number of people we can accommodate, so if you are interested in attending, I recommend you register early. When the time comes, I'll tell you some more details about the meeting.

Thursday, April 19, 2012

Schrödinger meets Newton

In January, we discussed semi-classical gravity: classical general relativity coupled to the expectation value of quantum fields. This theory is widely considered to be only an approximation to the still-looked-for fundamental theory of quantum gravity, most importantly because the measurement process messes with energy conservation if one takes the theory seriously; see the earlier post for details.

However, one can take the point of view that whatever the theorists think is plausible or not should still be experimentally tested. Maybe the semi-classical theory does in fact correctly describe the way a quantum wave-function creates a gravitational field; maybe gravity really is classical and the semi-classical limit exact, we just don't understand the measurement process. So what effects would such a funny coupling between the classical and the quantum theory have?

Luckily, to find out, it isn't really necessary to work with full general relativity; one can instead work with Newtonian gravity. That simplifies the issue dramatically. In this limit, the equation of interest is known as the Schrödinger-Newton equation. It is the Schrödinger equation with a potential term, where the potential is the gravitational field of a mass distributed according to the probability density of the wave-function. This looks like this:
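(Writing it for a single particle of mass m, in the form commonly used in the literature:)

    i\hbar\, \frac{\partial \psi(\mathbf{r},t)}{\partial t} \;=\; \left[ -\frac{\hbar^2}{2m}\, \nabla^2 \;-\; G m^2 \int \frac{|\psi(\mathbf{r}',t)|^2}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r' \right] \psi(\mathbf{r},t)

The integral is just the Newtonian potential sourced by the mass density m|ψ|².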


Inserting a potential that depends on the probability density of the wave-function makes the Schrödinger equation non-linear and changes its properties. The gravitational interaction is always attractive and thus tends to contract pressureless matter distributions. One expects this effect to show up here by contracting the wave-packet. Now, the usual non-relativistic Schrödinger equation results in a dispersion for massive particles, so that an initially focused wave-function spreads with time. The gravitational self-coupling in the Schrödinger-Newton equation acts against this spread. Which one wins, the spread from the dispersion or the gravitational attraction, depends on the initial values.

However, the gravitational interaction is very weak, and so is the effect. For typical systems in which we study quantum effects, either the mass is not large enough for a collapse, or the typical time for it to take place is too long. Or so you are led to think if you make some analytical estimates.

The details are left to a numerical study though, because the non-linearity of the Schrödinger-Newton equation spoils attempts to find analytical solutions. And so, in 2006, Carlip and Salzmann surprised the world by claiming that, according to their numerical results, the contraction caused by the Schrödinger-Newton equation might be observable in molecule interferometry, many orders of magnitude off the analytical estimate.

It took five years until a check of their numerical results came out, and then two papers were published almost simultaneously:
  • Schrödinger-Newton "collapse" of the wave function
    J. R. van Meter
    arXiv:1105.1579 [quant-ph]
  • Gravitationally induced inhibitions of dispersion according to the Schrödinger-Newton Equation
    Domenico Giulini and André Großardt
    arXiv:1105.1921 [gr-qc]
They showed independently that Carlip and Salzmann's earlier numerical study was flawed and that the accurate numerical result fits the analytical estimate very well. Thus, the good news is that one understands what's going on. The bad news is that it's about 5 orders of magnitude off today's experimental possibilities. But that's an area of physics where progress is presently rapid, so it's not hopeless!

It is interesting what this equation does, so let me summarize the findings from the new numerical investigations. These studies, I should add, have been done by looking at the spread of a spherically symmetric Gaussian wave-packet. The most interesting features are:
  • For masses smaller than some critical value, m ≲ (ℏ²/(G σ))^(1/3), where σ is the width of the initial wave-packet, the entire wave-packet expands indefinitely. (A numerical estimate follows below this list.)
  • For masses larger than that critical value, the wave-packet fragments and a fraction of the probability propagates outwards to infinity, while the rest remains localized in a finite region.
  • Of the cases that eventually collapse, the lighter ones expand initially and then contract; the heavier ones contract immediately.
  • The remnant wave function approaches a stationary state, about which it performs damped oscillations.
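To get a feel for the numbers, here is a quick evaluation of that critical mass. The wave-packet width of 500 nm is my own illustrative choice, not a value taken from the papers above:

    # Back-of-the-envelope: critical mass m ~ (hbar^2 / (G * sigma))^(1/3)
    # for an assumed initial wave-packet width sigma (illustrative value only).
    hbar = 1.0545718e-34   # J s
    G = 6.674e-11          # m^3 kg^-1 s^-2
    u = 1.6605e-27         # atomic mass unit in kg

    sigma = 500e-9         # assumed wave-packet width: 500 nm
    m_crit = (hbar**2 / (G * sigma))**(1/3)

    print(f"critical mass ~ {m_crit:.1e} kg ~ {m_crit / u:.1e} u")
    # ~ 7e-18 kg, i.e. a few times 10^9 atomic mass units, far above the
    # ~10^4 u molecules used in today's interferometers -- consistent with
    # the "about 5 orders of magnitude" quoted above.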
That the Schrödinger-Newton equation leads to a continuous collapse might lead one to think it could play a role in the collapse of the wave-function, an idea that was suggested already in 1984 by Lajos Diosi. However, this interpretation is questionable because it later became clear that the gravitational collapse one finds here isn't suitable to be interpreted as a wave-function collapse to an eigenstate. For example, in this 2002 paper, it was found that two bumps of probability density, separated by some distance, will fall towards each other and meet in the middle, rather than focus on one of the two initial positions as one would expect for a wave-function collapse.

Monday, April 16, 2012

The hunt for the first exoplanet

The little prince
Today, extrasolar planets, or exoplanets for short, are all over the news. Hundreds are known, and they are cataloged in The Extrasolar Planets Encyclopaedia, accessible to everyone who is interested. Some of these extrasolar planets orbit a star in what is believed to be a habitable zone, fertile ground for the evolution of life. Planetary systems much like ours have turned out to be a much more common result of stellar formation than had been expected.

But the scientific road to this discovery has been bumpy.

Once one knows that the stars in the night sky are suns like our own, it doesn't take a big leap of imagination to think that they might be accompanied by planets. Observational evidence for exoplanets was sought as early as the 19th century, but the field had a bad start.

Beginning in the 1950s, several candidates for exoplanets made it into the popular press, yet they turned out to be data flukes. At that time, the experimental method relied on detecting minuscule changes in the motion of the star caused by a heavy, Jupiter-type planet.

If you recall the two-body problem from your first semester: it's not that one body orbits the other; both orbit their common center-of-mass. It's just that, if one body is much heavier than the other, it almost looks like the lighter one is orbiting the heavier one. But if a sufficiently heavy planet orbits a star, one might in principle find out by watching the star very closely, because it wobbles around the center-of-mass. In the 50s, watching the star closely meant watching its distance to other stellar objects. The precision that could be achieved this way simply wasn't sufficient to reliably tell the presence of a planet.

In the early 80s, Gordon Walker and his postdoc Bruce Campbell from British Columbia, Canada, pioneered a new technique that improved the precision with which the motion of the star could be tracked by two orders of magnitude. Their technique relied on measuring the star's absorption lines, whose frequencies depend on the motion of the star relative to us because of the Doppler effect.
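To get a sense of the required precision, here is a rough estimate for a Jupiter-like planet around a Sun-like star; the round numbers are my own illustrative choices:

    # Rough size of the stellar "wobble" caused by a Jupiter-like planet and
    # the corresponding Doppler shift of the star's absorption lines.
    # Round illustrative numbers, not the actual targets of the survey.
    M_star = 2.0e30      # kg, about one solar mass
    M_planet = 1.9e27    # kg, about one Jupiter mass
    v_planet = 13_000.0  # m/s, roughly Jupiter's orbital speed
    c = 3.0e8            # m/s

    # Both bodies orbit the common center of mass: M_star * v_star = M_planet * v_planet
    v_star = v_planet * M_planet / M_star
    print(f"stellar reflex velocity ~ {v_star:.0f} m/s")    # ~ 12 m/s
    print(f"fractional Doppler shift ~ {v_star / c:.1e}")   # ~ 4e-8

A line shift of a few parts in 10^8 is why a stable wavelength reference was essential.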

To make that method work, Walker and Campbell had to find a way to precisely compare spectral images taken at different times, so they'd know how much the spectrum had shifted. They found an ingenious solution: they would use the very regular and well-known molecular absorption lines of hydrogen fluoride gas. The comb-like absorption lines of hydrogen fluoride served as a ruler relative to which they could measure the star's spectrum, allowing them to detect even the smallest changes. Then, together with astronomer Stephenson Yang, they started looking at candidate stars which might be accompanied by Jupiter-like planets.

To detect the motion of the star due to the planet, they would have to record the system for several completed orbits. Our planet Jupiter needs about 12 years to orbit the sun, so they were in for a long-term project. Unfortunately, they had a hard time finding support for their research.

In his recollection “The First High-Precision Radial Velocity Search for Extra-Solar Planets” (arXiv:0812.3169), Gordon Walker recounts that it was difficult to get time for their project at observatories: “Since extra-solar planets were expected to resemble Jupiter in both mass and orbit, we were awarded only three or four two-night observing runs each year.” And though it is difficult to understand today, back then many of Walker's astronomer colleagues thought the search for exoplanets a waste of time. Walker writes:
“It is quite hard nowadays to realise the atmosphere of skepticism and indifference in the 1980s to proposed searches for extra-solar planets. Some people felt that such an undertaking was not even a legitimate part of astronomy. It was against such a background that we began our precise radial velocity survey of certain bright solar-type stars in 1980 at the Canada France Hawaii 3.6-m Telescope.”

After years of data taking, they had identified several promising candidates, but were too cautious to claim a discovery. At the 1987 meeting of the American Astronomical Society in Vancouver, Campbell announced their preliminary results. The press happily reported yet another discovery of an exoplanet, but astronomers regarded even Walker and Campbell's cautious interpretation of the data with great skepticism. In his article "Lost world: How Canada missed its moment of glory," Jacob Berkowitz describes the reaction of Walker and Campbell's colleagues:

“[Campbell]'s professional colleagues weren't as impressed [as the press]. One astronomer told The New York Times he wouldn't call anything a planet until he could walk on it. No one even attempted to confirm the results.”

Walker's gifted postdoc Bruce Campbell suffered most from the slow-going project, which lacked appreciation and had difficulties securing continued funding. In 1991, after more than a decade of data taking, they still had no discovery to show for it. Campbell had meanwhile reached age 42 and was still sitting on a position that was untenured, not even tenure-track. His frustration built up to the point where he quit his job. When he left, he erased all the analyzed data in his university account. Luckily, his (both tenured) collaborators Walker and Yang could recover the data. Campbell made a radical career change and became a personal tax consultant.

But in late 1991, Walker and Yang were finally almost certain they had found sufficient evidence for an exoplanet around the star gamma Cephei, whose spectrum showed a consistent 2.5-year wobble. In a fateful coincidence, just when Walker thought they had pinned it down, one of his colleagues, Jaymie Matthews, came by his office, looked at the data, and pointed out that the wobble coincided with what appeared to be periods of heightened activity on the star's surface. Walker looked at the data with new eyes and, mistakenly, came to believe that they had all along been watching an oscillating star rather than a periodic motion of the star's position.

Shortly after that, in early 1992, Nature reported the first confirmed discovery of an exoplanet, by Wolszczan and Frail, based in the USA. Yet the planet they found orbits a millisecond pulsar (probably a neutron star), so for many the discovery doesn't score highly, because the star's collapse would have wiped out all life in that planetary system long ago.

Then, in 1995, astronomers Mayor and Queloz of the University of Geneva announced the first definitive observational evidence for an exoplanet orbiting a normal star. The planet has an orbital period of only a few days, so no decade-long recording was necessary.

It wasn't until 2003 that the planet that Walker, Campbell and Yang had been after was finally confirmed.

There are three messages to take away from this story.

First, Berkowitz in his article points out that Canada failed to have faith in Walker and Campbell's research at a time when just a little more support would have made them the first to discover an exoplanet. Funding for long-term projects is difficult to obtain, and it's even more difficult if the project doesn't produce results before it's really done. That can be an unfortunate hurdle for discoveries.

Second, it is in hindsight difficult to understand why Walker and Campbell's colleagues were so unsupportive. Nobody ever really doubted that exoplanets exist, and with the precision of measurements in astronomy steadily increasing, sooner or later somebody would be able to find statistically significant evidence. It seems that a few early false claims caused a very unfortunate backlash that went beyond the reasonable.

Third, in the forest of complaints about the lack of funding for basic research, especially for long-term projects, every tree is a personal tragedy.

Saturday, April 14, 2012

Book review: “How to Teach Relativity to Your Dog” by Chad Orzel

How to Teach Relativity to Your Dog
By Chad Orzel
Basic Books (February 28, 2012)

Let me start with three disclaimers: First, I didn’t buy the book, I got a free copy from the editor. Second, this is the second of Chad Orzel’s dog physics books and I didn’t read the first. Third, I’m not a dog person.

Chad Orzel from Uncertain Principles is a professor of physics at Union College, and the best-known fact about him is that he talks to his dog, Emmy. Emmy is the type of dog large enough to sniff your genitals without clawing into your thighs, which I think counts in her favor.

That Chad talks to his dog is of course not the interesting part. I mean, I talk to my plants, but who cares? (How to teach hydrodynamics to your ficus.) But Chad imagines his dog talks back, and so the book contains conversations between Emmy and Chad about physics.

In this book, Chad covers the most important aspects of special and general relativity: time dilatation and length contraction, space-time diagrams, relativistic four-momentum, the equivalence principle, space-time curvature, the expansion of the universe and big bang theory. Emmy and Chad however go beyond that by introducing the reader also to the essentials of black holes, high energy particle collisions, the standard model of particle physics and Feynman diagrams. They even add a few words on grand unification and quantum gravity.

The physics explanations are very well done, and there are many references to recent observations and experiments, so the reader is not left with the impression that all this is last century’s stuff. The book contains many helpful figures and even a few equations. It also comes with a glossary and a guide to further reading.

Emmy’s role in the book is to engage Chad in a conversation. These dialogues are very well suited to introduce unfamiliar subjects because they offer a natural way to ask and answer questions, and Chad uses them masterfully. Besides Emmy the dog, the reader also meets Nero the cat and there are a lot of squirrels involved too. The book is written very well, in unique do..., oops, Orzel-style, with a light sense of humor.

It is difficult for me to judge this book. I must have read dozens of popular science introductions to special and general relativity, but most of them 20 years ago. Chad explains very well, but then all the dog stuff takes up a lot of space (the book has 300 pages) and if you are, like me, not really into dogs, the novelty wears off pretty fast and what’s left are lots of squirrels.

I did, however, learn something from this book, for example that dogs eat cheese, which was news to me. I also learned that Emmy is partly German shepherd and thus knows the word “Gedankenexperiment,” though Stefan complains that she doesn’t know the difference between genitive and dative.

In summary, Chad Orzel’s book “How to Teach Relativity to Your Dog” is a flawless popular science book that gets across a lot of physics in an entertaining way. If you always wanted to know what special and general relativity is all about and why it matters, this is a good starting point. I’d give this book 5 out of 5 tail wags.

Thursday, April 12, 2012

Some physics-themed ngram trends

I've been playing again with Google ngram, which shows the frequency with which words appear in books in the Google database, normalized to the number of books. Here are some physics keywords I tried that I found quite interesting.

In the first graph below you see "black hole" in blue which peaks around 2002, "big bang" in red which peaks around 2000, "quantization" in green which peaks to my puzzlement around 1995, and "dark matter" in yellow which might peak or plateau around 2000. Data is shown from 1920 to 2008. Click to enlarge.



In the second graph below you see the keywords "multiverse" in blue, which has been increasing since about 1995 but interestingly seems to have been around long before that, "grand unification" in yellow which peaks in the mid 80s and has been in decline since, "theory of everything" in green which plateaus around 2000, and "dark energy" in red which appears in the late 90s and is still sharply increasing. Data is shown from 1960 to 2008. Click to enlarge.



This third figure shows "supersymmetry" in blue which peaks around 1985 and 2001, "quantum gravity" in red which might or might not have plateaued, and "string theory" in green which seems to have decoupled from supersymmetry in early 2002 and avoided the drop. Data is shown from 1970 to 2008.



A graph that got so many more hits it wasn't useful to plot it with the others: "emergence" which peaked in the late 90s. Data is shown from 1900 to 2008.

More topics of the past: "cosmic rays" in blue which was hot in the 1960s, "quarks" in green which peaks in the mid 90s, and "neutrinos" in red which peaks around 1990. Data is shown from 1920 to 2008.

Even quantum computing seems to have maxed out (data is shown from 1985 to 2008).

So, well, then what's hot these days? See below "cold atoms" in blue, "quantum criticality" in red and "qbit" in green. Data is shown from 1970 to 2008.

So, condensed matter and cosmology seem to be the wave of the future, while particle physics is in decline and quantum gravity doesn't really know where to go. Feel free to leave your interpretation in the comments!

Tuesday, April 10, 2012

Be careful what you wish for

Michael Nielsen in his book “Reinventing Discovery” relates the following anecdote from the history of science.

In the year 1610, Galileo discovered that the planet Saturn, the most distant planet then known, had a peculiar shape. Galileo’s telescope was not good enough to resolve Saturn’s rings, but he saw two bumps on either side of the main disk. To make sure this discovery would be credited to him, while still leaving him time to do more observations, Galileo followed a procedure common at the time: He sent the announcement of the discovery to his colleagues in the form of an anagram
    smaismrmilmepoetaleumibunenugttauiras

This way, Galileo could avoid revealing his discovery, but would still be able to later claim credit by solving the anagram, which meant “Altissimum planetam tergeminum observavi,” Latin for “I observed the highest of the planets to be three-formed.”
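Just for fun, one can check that the solution really is a rearrangement of the scrambled string, keeping in mind that Latin spelling does not distinguish u from v:

    # Check that Galileo's solution is an anagram of his scrambled announcement.
    # Latin orthography treats u and v as the same letter, so we identify them.
    from collections import Counter

    def letters(text):
        return Counter(ch for ch in text.lower().replace("v", "u") if ch.isalpha())

    puzzle = "smaismrmilmepoetaleumibunenugttauiras"
    solution = "Altissimum planetam tergeminum observavi"

    print(letters(puzzle) == letters(solution))   # True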

Among Galileo’s colleagues who received the anagram was Johannes Kepler. Kepler had at this time developed a “theory” according to which the number of moons per planet must follow a certain pattern. Since Earth has one moon and four of Jupiter’s moons were known, Kepler concluded that Mars, the planet between Earth and Jupiter, must have two moons. He worked hard to decipher Galileo’s anagram and came up with “Salve umbistineum geminatum Martia proles,” Latin for “Be greeted, double knob, children of Mars,” though one letter remained unused. Kepler interpreted this as meaning Galileo had seen the two moons of Mars, and thereby confirmed Kepler’s theory.

Psychologists call this effort that the human mind makes to brighten the facts “motivated cognition,” more commonly known as “wishful thinking.” Strictly speaking, the literature distinguishes the two: wishful thinking is about the outcome of a future event, while motivated cognition is concerned with partly unknown facts. Wishful thinking is an overestimate of the probability that a future event has a desirable outcome, for example that the dice will all show six. Motivated cognition is an overly optimistic judgment of a situation with unknowns, for example that you’ll find a free spot in a garage whose automatic counter says “occupied,” or that you’ll find the keys under the streetlight.

There have been many small-scale psychology experiments showing that most people are prone to overestimate a lucky outcome (see eg here for a summary), even if they know the odds, which is why motivated cognition is known as a “cognitive bias.” It’s an evolutionarily developed way of looking at the world that, however, doesn’t lead one to an accurate picture of reality.

Another well-established cognitive bias is the overconfidence bias, which comes in various expressions, the most striking one being “illusory superiority.” To see just how common it is for people to overestimate their own performance, consider the 1981 study by Svenson, which found that 93% of US American drivers rate themselves as better than average.

The best known bias is maybe confirmation bias, which leads one to unconsciously pay more attention to information confirming already held beliefs than to information contradicting them. And a bias that got a lot of attention after the 2008 financial crisis is “loss aversion,” characterized by the perception of a loss as being more relevant than a comparable gain, which is why people are willing to tolerate high risks just to avoid a loss.

It is important to keep in mind that these cognitive biases serve a psychologically beneficial purpose. They allow us to maintain hope in difficult situations and a positive self-image. That we have these cognitive biases doesn’t mean there’s something wrong with our brain. On the contrary, they’re helpful to its normal operation.

However, scientific research seeks to unravel the truth, which isn’t the brain’s normal mode of operation. Therefore scientists learn elaborate techniques to triple-check each and every conclusion. This is why we have measures for statistical significance, control experiments and double-blind trials.

Despite that, I suspect that cognitive biases still influence scientific research and hinder our truth-seeking efforts, because we can’t peer review scientists’ motivations, and we’re all alone inside our heads.

And so the researcher who tries to save his model by continuously adding new features might misjudge the odds of being successful due to loss aversion. The researcher who meticulously keeps track of advances of the theory he works on himself, but only focuses on the problems of rival approaches, might be subject to confirmation bias, skewing his own and other people’s evaluation of progress and promise. The researcher who believes that his prediction is always just on the edge of being observed is a candidate for motivated cognition.

And above all that, there’s the cognitive meta-bias, the bias blind spot: I can’t possibly be biased.

Scott Lilienfeld in his SciAm article “Fudge Factor” argued that scientists are particularly prone to confirmation bias because
“[D]ata show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant”

As a scientist, I regard my brain as the toolbox for my daily work, and so I am trying to learn what can be done about its shortcomings. It is to some extent possible to work on a known bias by rationalizing it: by consciously seeking out information that might challenge one’s beliefs, asking a colleague for a second opinion on whether a model is worth investing more time in, daring to admit to being wrong.

And despite that, not to forget the hopes and dreams.

Mars, btw, does to our best current knowledge indeed have two moons.

Sunday, April 08, 2012

Happy Easter!

Stefan honors the Easter tradition by coloring eggs every year. The equipment for this procedure is stored in a cardboard shoe-box labeled "Ostern" (Easter). The shoe-box dates back to the 1950s and once contained a pair of shoes produced according to the newest orthopedic research.

I had never paid much attention to the shoe-box, but as Stefan pointed out to me this year, back then the perfect fit was sought by x-raying the foot inside the shoe. The lid of the box carries an advertisement for this procedure, which was apparently quite common for a while.



Click to enlarge. Well, they don't x-ray your feet in the shoe stores anymore, but Easter still requires coloring the eggs. And here they are:



Happy Easter everybody!

Friday, April 06, 2012

Book Review: "The Quest for the Cure" by B.R. Stockwell

The Quest for the Cure: The Science and Stories Behind the Next Generation of Medicines
By Brent R. Stockwell
Columbia University Press (June 1, 2011)

As a particle physicist, I am always amazed when I read about recent advances in biochemistry. As far as I am concerned, the human body is made of ups and downs and electrons, kept together by photons and gluons - and that's pretty much it. But in biochemistry, they have all these educated-sounding words. They have enzymes and amino acids, they have proteases, peptides and kinases. They have a lot of proteins, and molecules with fancy names used to drug them. And these things do stuff. Like break up and fold and bind together. All these fancy-sounding things and their interactions are what makes your body work; they decide over your health and your demise.

With all that foreign terminology, however, I've found it difficult to impossible to read any paper on the topic. In most cases, I don't even understand the title. If I make an effort, I have to look up every second word. I do just fine with the popular science accounts, but these always leave me wondering: just how do they know this molecule does this, and how do they know this protein breaks there, fits there, and that this causes cancer and that blocks some cell function? What are the techniques they use and how do they work?

When I came across Stockwell's book "The Quest for the Cure" I thought it would help me solve some of these mysteries. Stockwell himself is a professor of biology and chemistry at Columbia University. He's a guy with many well-cited papers. He knows words like oligonucleotides and is happy to tell you how to pronounce them: oh-lig-oh-NOOK-lee-oh-tide. Phosphodiesterase: FOS-foh-dai-ESS-ter-ays. Nicotinonitrile: NIH-koh-tin-oh-NIH-trayl. Erythropoietin: eh-REETH-roh-POIY-oh-ten. As a non-native speaker I want to complain that this pronunciation help isn't of much use for a non-phonetic language; I can think of at least three ways to pronounce the syllable "lig." But then that's not what I bought the book for anyway.

The starting point of "The Quest for the Cure" is a graph showing the drop in drug approvals since 1995. Stockwell sets out to first explain the origin of this trend and then what can be done about it. In a nutshell, the issue is that many diseases are caused by proteins that are today considered "undruggable," which means they are folded in such a way that the small molecules suitable for creating drugs can't bind to the proteins' surfaces. Unfortunately, only a small number of proteins can be targeted by presently known drugs:
"Here is the surprising fact: All of the 20,000 or so drug products that ever have been approved by the U.S. Food and Drug Administration interact with just 2% of the proteins found in human cells."
And fewer than 15% are considered druggable at all.

Stockwell covers a lot of ground in his book, from the early days of genetics and chemistry to today's frontier of research. The first part of the book, in which he lays out the problem of the undruggable proteins, is very accessible and well written. Evidently, a lot of thought went into it. It comes with stories of researchers and patients who were treated with new drugs, and of how our understanding of diseases has improved. In the first chapters, every word is meticulously explained, or technical terms are avoided, to the point that "taken orally" has been replaced by "taken by mouth."

Unfortunately, the style deteriorates somewhat thereafter. To give you an impression, it starts reading more like this:
"Although sorafenib was discovered and developed as an inhibitor of RAF, because of the similarity of many kinases, it also inhibits several other kinases, including the patelet-derived growth factor, the vascular endothelia growth factor (VEGF) receptors 2 and 3, and the c-KIT receptor."

Now, the book contains a glossary, but it's incomplete (eg it contains neither VEGF nor c-KIT). With the large amount of technical vocabulary, at some point it doesn't matter anymore whether a word was introduced, because if it's not something you deal with every day, it's difficult to keep in mind the names of all sorts of drugs and molecules. It gets worse if you put down the book for a day or two. This doesn't contribute to the readability of the book, and it is somewhat annoying to realize that much of the terminology is never used again, so one doesn't really know why it was necessary to begin with.

The second part of the book deals with the possibilities for overcoming the problem of the undruggable proteins. In that part of the book, the stories of researchers curing patients are replaced with stories of the pharmaceutical industry, the founding of companies, and the ups and downs of their stock prices.

Stockwell's explanations left me wanting on exactly the points I would have been interested in. He writes, for example, a few pages about nuclear magnetic resonance and that it's routinely used to obtain high-resolution 3-d pictures of small proteins. One does not, however, learn how this is actually done, other than that it requires "complicated magnetic manipulations" and "extremely sophisticated NMR methods." He spends a paragraph and an image on light-directed synthesis of peptides that is vague at best, and one learns that peptides can be "stapled" together, which improves their stability, yet one has no clue how this is done.

Now, the book is extremely well referenced, and I could probably go and read the respective papers in Science. But then I would have hoped that Stockwell's book would save me exactly this effort.

On the upside, Stockwell does an amazingly good job communicating the relevance of basic research and the scientific method, and in my opinion this makes up for the above shortcomings. He tells stories of unexpected breakthroughs that came about by little more than coincidence, he writes about the relevance of negative results and control experiments, and how scientific research works:
"There is a popular notion about new ideas in science springing forth from a great mind fully formed in a dazzling eureka moment. In my experience this is not accurate. There are certainly sudden insights and ideas that apear to you from time to time. Many times, of course, a little further thought makes you realize it is really an absolutely terrible idea... But even when you have an exciting new idea, it begins as a raw, unprocessed idea. Some digging around in the literature will allow you to see what has been done before, and whether this idea is novel and likely to work. If the idea survives this stage, it is still full of problems and flaws, in both the content and the style of presenting it. However, the real processing comes from discussing the idea, informally at first... Then, as it is presented in seminars, each audience gives a series of comments, suggestions, and questions that help mold the idea into a better, sharper, and more robust proposal. Finally, there is the ultimate process of submission for publication, review and revision, and finally acceptance... The scientific process is a social process, where you refine your ideas through repeated discussions and presentations."

He also writes in a moderate dose about his own research and experience with the pharmaceutical industry.

The proposals Stockwell makes for how to deal with the undruggable proteins have a solid basis in today's research. He isn't offering dreams or miracle cures, but points out hopeful recent developments, for example how it might be possible to use larger molecules. The problem with large molecules is that they tend to be less stable and don't enter cells readily, but he quotes research that shows possible ways to overcome this problem. He also explains the concept of a "privileged structure": structures that have been found, with slight alterations, to bind to several proteins. Using such privileged structures might allow one to sort through a vast parameter space of possible molecules with a higher success rate. He also talks about using naturally occurring structures and the difficulties with that. He ends his book by emphasizing the need for more research on this important problem of the undruggable proteins.

In summary: "The Quest for the Cure" is a well-written book, but it contains too many technical expressions, and in many places scientific explanations are vague or lacking. It comes with some figures which are very helpful, but there could have been more. You don't need to read the blurb to figure out that the author isn't a science writer but a researcher. I guess he's done his best, but I also think his editor should have dramatically sorted out the vocabulary or at least have insisted on a more complete glossary. Stockwell makes up for this overdose of biochemistry lingo with communicating very well the relevance of basic research and the power of the scientific method.

I'd give this book four out of five stars because I appreciate that Stockwell has taken the time to write it to begin with.

Wednesday, April 04, 2012

On the importance of being wrong

Some years ago, I attended a seminar by a young postdoc who spoke about an extension of the standard model of particle physics. Known as “physics beyond the standard model,” this is a research area where theory is presently way ahead of experiment. In the hope of hitting something by shooting in the dark, theorists add stuff that we haven’t seen to the stuff we know, and then explain why we haven’t seen the additional stuff – but might see it with some experiment which is about to deliver results. Ie, the theorists tell experimentalists where to look.

Due to the lack of observational evidence, the main guide in this research area is mathematical consistency combined with intuition. This type of research is absolutely necessary to make progress in the present situation, but it’s also very risky. Most of the models considered today will turn out to be wrong.

The content of the seminar wasn’t very memorable. The reason I still recall it is that, after the last slide had flashed by, somebody asked what the motivation was to consider this extension of the standard model, to which the speaker replied, “There is none, except that it can be done.”

This is a remarkably honest answer, especially since it came from a young researcher who still had ahead of him the torturous road to tenure.

You don’t have to look far in the blogosphere or on Amazon to find unsolicited advice for researchers on how to sell themselves. There now exist coaching services for scientists, and some people make money writing books about “Marketing for Scientists.” None of them recommends that, when you’ve come to the conclusion that a theory you looked at wasn’t as interesting as you might have thought, you go and actually say that. Heaven forbid: you’re supposed to be excited about the interesting results. You were right all along that the result would be important. And there are lots of motivations for why this is the one and only right thing to do. You have gained great insights in your research that are relevant for the future of mankind at least, if not for all mankinds in all multiverses.

The advice is well meant. It’s advice for how to reach your presumed personal goal of landing a permanent position in academia, taking into account the present mindset of your older peers. It is not advice for how best to benefit scientific research in the long run. In fact, unfortunately, the two goals can be in conflict.

Of course any researcher should first and foremost work on something interesting, well motivated, and likely to deliver exciting results! But most often it doesn’t work the way you wish it would. To help move science forward, the conclusion that the road you’ve been on doesn’t seem too promising should be published, to prevent others from following you into a dead end, or at least to tell them where the walls are. Say it, and start something new. It’s also important for your personal development. If you advertise your unexciting research as the greatest thing ever, you might eventually come to believe it and waste your whole life on it.

The reason nobody advises you to say your research project (which might not even have been your own choice) is unexciting is that it’s difficult if not impossible to publish a theoretical paper that examines an approach just to come to the conclusion that it’s not a particularly convincing description of nature. The problem with publishing negative results might be familiar to you from medicine, but it exists in theoretical physics as well. Even if you get it published, and even if it’s useful in saving others the time and work that you have invested, it will not create a research area and it’s unlikely to become well-cited. If that’s all you think matters, then as far as your career is concerned it would indeed be a waste of your time.

So, they are arguably right with their career advice. But as a scientist your task is to advance our understanding of nature, even if that means concluding you’ve wasted your time – and telling others about it. If you make everybody believe in the excitement of an implausible model, you risk getting stuck on a topic you don’t believe in. And, if you’re really successful, you get others stuck on it too. Congratulations.

This unexciting seminar speaker some years ago, and my own yawn, made me realize that we don’t value enough those who say: “I tried this and it was a mistake. I thought it was exciting, but I was wrong.” Basic research is a gamble. Failure is normal and being wrong is important.

Monday, April 02, 2012

Interna

In the past month, Lara and Gloria have learned to learn. They try to copy and repeat everything we do. Lara surprised me by grabbing a brush and pulling it through her hair and Gloria, still short on hair, tries to put on her shoes. They haven't yet learned to eat with a spoon, but they've tried to feed us.

They both understand simple sentences. If I ask where the second shoe is, they'll go and get it. If I tell them lunch is ready, they'll both come running and try to push the high chairs towards the table. If we tell them we'll go for a walk, they run to the door. If we do as much as mention cookies, they'll point at the bag and insist on having one.

Lara is still the more reserved one of the two. Faced with something new, she'll first watch from a distance. Gloria has no such hesitations. Last week, I childproofed the balcony. Lara, who was up first, saw the open door and froze. She stood motionless, staring at the balcony for a full 10 minutes. Then Gloria woke up, came running while yelling "Da,da" - and stumbled over the door sill, landing on her belly. Lara then followed her, very carefully.

Now that spring is coming and the girls are walking well, we've been to the playground several times. Initially Lara and Gloria just sat there, staring at the other children. But meanwhile they have both made some contacts with other children, though not without looking at me every other minute to see if I approve. Gloria, as you can guess, is the more social one. She'll walk around with her big red bucket and offer it to others, smiling brightly. She's 15 months and has at least 3 admirers already, all older boys who give her toys, help her to walk, or even carry her around. (The boys too look at me every other minute to see if I approve.) Lara and I, we watch our little social butterfly, and build sand castles.

From my perspective, the playground is a new arena too. Weekdays, the adult population is exclusively female and comes in two layers of generations, either the mothers or the grandmothers. They talk about their children and pretty much nothing but their children, unless you want to count pregnancies separately. After some initial mistakes, I now bring a book, paper, or a magazine with me to hide behind.

Another piece of news from the past month is that I finally finished the review on the minimal length in quantum gravity that I've been working on since last year. It's now on the arXiv. The first 10 pages should be understandable for pretty much everybody, and the first half should be accessible also for undergraduates. So if you were wondering what I'm doing these days besides running after my daughters, have a look at my review.

Sunday, April 01, 2012

Computer Scientists develop Software for Virtual Member of Congress

A group of computer scientists from Rutgers University has published software intended for crowd-sourcing the ideal candidate. "We were asking ourselves: Why do we waste so much time with candidates who disagree with themselves, aren't able to recall their party's program, and whose intellectual output is inferior even to Shit Siri Says?" recalls Arthur McTrevor, who led the project. "Today, we have software that can perform better."

McTrevor and his colleagues then started coding what they refer to as the "unopinionated artificial intelligence" of the virtual representative, the main information processing unit. The unopinionated intelligence is a virtual skeleton which comes alive by crowd-sourcing opinions from a selected group of people, for example party members. Members feed the software with opinions, which are then aggregated and reformulated to minimize objectionable statements. The result: The perfect candidate.

The virtual candidate also has a sophisticated speech assembly program, a pleasant-looking face, and a fabricated private life. Visual and audio appearance can be customized. The virtual candidate has a complete and infallible command of the constitution, all published statistical data, and can reproduce quotations from memorable speeches and influential books in the blink of an eye. "80 microseconds, actually," said McTrevor. The software moreover automatically creates and feeds its own Facebook account and twitter feed.

The group from Rutgers tested the virtual representative in a trial run whose success is reported in a recent issue of Nature. In their publication, the authors point out that the virtual representative is not a referendum that aggregates the opinions of the general electorate. Rather, it serves a selected group to find and focus their identity, which can then be presented for election.

In an email conversation, McTrevor was quick to point out that the virtual candidate is made in the USA, and its patent is dated 2012. The candidate will thus be eligible to run for Congress at the "age" of 25, in 2037.