Wednesday, April 28, 2010

It's the stupidity, stupid

Something else that I read on our long road trip from Frankfurt to Stockholm is this nice essay by Martin Schwartz on the importance of stupidity in scientific research. Schwartz is Professor of Microbiology and Biomedical Engineering at the University of Virginia.

“Science makes me feel stupid,” he writes. But instead of avoiding it, he “actively seek[s] out new opportunities to feel stupid.” In a nutshell, his essay says that if you're doing research and you don't feel stupid every now and then, you're doing something wrong. You have to keep asking until you ask what nobody has asked before, and then you're on your own. Feeling stupid. If you stick to questions whose answers are known, you might feel smart, but you won't contribute to knowledge discovery.

I basically agree with Schwartz and appreciate how well he gets his point across. It is perfectly okay if science makes you feel stupid, whether you're a professional scientist or not. Just don't stop there. I do, however, find it somewhat misleading that Schwartz calls not knowing an answer "stupidity," since that mixes up knowledge with intelligence. But it makes for a catchier title.

Unfortunately, school does a poor job of communicating what actual research is like. School science is still mostly a presentation of knowledge that's at least a century old. The answers are all known and your task is to pipe them into your head. But that's poor preparation for research, and it doesn't get across the wonder and fascination of going where nobody has gone before and thinking what nobody has thought before. I vividly recall that in my first semesters at the university the most exciting moments were when a professor or a tutor (usually a postdoc) mentioned an unsolved problem, an open question. There it was, the frontier of knowledge, and I wanted to go and poke around in the dark.

Martin Schwartz recalls his experience with his first research project of his own:
“The crucial lesson was that the scope of things I didn't know wasn't merely vast; it was, for all practical purposes, infinite. That realization, instead of being discouraging, was liberating. If our ignorance is infinite, the only possible course of action is to muddle through as best we can.”

Of course one never really knows whether there isn't somebody out there who knows the answer to your question. And if you've spent weeks on a problem only to figure out that the answer was indeed well known to other people on the planet, you feel really stupid. But then it also happens occasionally that the answer everybody thought was well known actually was wrong... As my teacher used to say: the only stupid question is the question not asked.

Partly related, Eric-Wubbo Lameijer over at Nature Network has an excellent series of posts on IQ, what it measures and what it doesn't: Should you be smart to become a scientist? I, II and III. See also my earlier post How important is talent?.

Tuesday, April 27, 2010

Oh-oo-oh, you think you're special

I once joked that when my blood pressure is too low I go and buy Time Magazine. It works better than medication and with fewer side effects. Now that I live in Sweden, however, Time Magazine doesn't stare at me at the register in every supermarket. Since Stefan cares about my well-being, he now has a subscription, special offer, EUR 20 for one year. And thus, on our road trip from Frankfurt back to Stockholm I browsed through one of the recent issues. So I can now offer you a particularly nice example of American arrogance, pardon, self-confidence.

In the March 11 issue, Andres Martinez suggests one of "the most important trends of the new decade." The "important trend" of his article is, in a nutshell, that the world will become a Global America:
"The fact that the rest of the world is becoming more like us — in ways good and bad — underscores the extent to which we are living in an American century, even as it erodes, by definition, the notion of American exceptionalism."

He derives this development towards The Global America from a thought experiment:
"If you bring together teenagers from Nigeria, Sweden, South Korea and Argentina — to pick a random foursome — what binds these kids together in some kind of community is ..."

Let's see, what could it be that binds these teenagers together? They like Pizza? They have skin problems? They think their parents are embarrassing? No, it's actually...
"... American culture: the music, the Hollywood fare, the electronic games, Google, American consumer brands."

Well, to be honest I don't know very much about the Nigerian music scene, but I'm not sure what this is supposed to say except that some countries can't really afford to invest millions in "producing" stars. So, for a better comparison, let's have a look at this week's German top 10 single charts. The singers' nationalities are, in order from 1 to 10: German, German, Belgian, German, American, British, French, American, Virgin Islands (I guess that's still British?), American. 3 out of 10 is not bad, but maybe Martinez should take a trip to Germany or France and turn on a radio to get a realistic perspective on the international music scene.

Sure, I am willing to admit that he is right to a large extent, American music and movies are widespread. And yes, we all use Google and it's an American company. But that's the past. How is that predictive for what the next century will look like?

Well, Martinez isn't done with his insightful analysis. He further lets the reader know:
"As anyone raised in a different country will tell you, two of the strongest impressions someone has on arriving in the U.S. are 1) what a great country this seems to be, and 2) what a mess it must be, judging by the tenor of news coverage and political discourse."

ROTFL. The strongest impressions I had after moving to the USA were 1) what a mess this country is and 2) how ridiculous it is that the news coverage desperately tries to protect the American "exceptionalism." Martinez' article is an excellent example of 2).

As you can guess, I know a lot of people who are European (mostly German) who spent a postdoc in the USA. They generally share my impression. Let's face it: American food either sucks or is overpriced. American highways are in a pitiful state countrywide. Americans seem to have no clue how to do decent plumbing work or how to build a functioning sewage system. They will instead always tell you there's something specifically weird with their weather. For example, it might rain. Or the sun might shine. Windows in America either don't open or, if you managed to open them, won't close. Since the windows and doors don't really close, naturally the heating or air conditioning always has to be running. And let's not even start with issues like education, poverty or health insurance. I think you get the point. The only thing that's really exceptional about Americans is how they still manage to believe they are exceptional.

But you know what? This lifestyle based on low quality standards and constant maintenance has one big advantage: it increases the GDP that Martinez is so proud of. Yes, that's right, every time you wreck a wheel in a pothole, every time your child gets sick, every time you call the plumber, every time you call customer service, every time something breaks and has to be fixed, every time something can't be fixed and has to be replaced, the GDP goes up.

Luckily, contrary to what Martinez writes, Europe is not turning into a second America. We actually have working public transportation over here. According to the World Economic Forum, the most tech-friendly country is Sweden. Since 2001, the "Top Intelligent Community" has not been in the USA (and in 2001 it was NYC, hardly representative of the nation). Next time you go to the dentist, look at the label on the instruments, because chances are the equipment is made in Germany. In most of Europe, same-sex partnerships are legal. In Germany, prostitution is legal, and so is abortion. How long will it take for Americans to crawl out of the 20th century? And Shania Twain, who wrote the line that is the title of this blog post, well, she's Canadian.

Okay. Now that I'm done with Martinez' ridiculous essay, let me make this clear: I'm not a nationalist. There are indeed many things about America that I like very much. Above all, there's the entrepreneurial spirit, which is taken up in the (infinitely better) article "In Defense of Failure" by Megan McArdle in the same issue of Time Magazine. If you have a start-up idea, if you want to try something new, if you want to be crazy: America is the place to go, not Europe. There are many things Europe could learn from America, above all maybe how to establish a proper "union," and there are things America could learn from Europe, above all maybe how to build proper highways. The same probably holds for other parts of the world. We can all learn from each other, and this exchange is not a one-way process.

I don't think we'll globally converge on the same values and tastes any time soon, and I don't think this would be desirable either. There are some issues we have to converge on in an increasingly interconnected planet, and we have to work on that. But it is extremely unlikely that the outcome will be The Global America.
    "You're Tarzan!
    Captain Kirk maybe.
    John Wayne.
    Whatever!
    That don't impress me much!"

Sunday, April 25, 2010

A culture for debate?

Spiegel ONLINE has a very interesting and well-written article that tells a story about controversy and the limits of collective intelligence at Wikipedia.

The article contains some interesting facts, some of which you might have heard before. It is specifically about the German Wikipedia site, but I doubt this makes much of a qualitative difference. While the site is frequently consulted, it's only a small fraction of visitors, of the order of a tenth of a percent, who edit articles. Most of the registered users seem to never use their account. The people who contribute frequently seem to be driven to a large extent by the social ranking in that community. This is not very different from other online forums. The number of rules and regulations for editing Wikipedia articles has been steadily increasing. Spiegel ONLINE interviewed Henriette Fiebig, who works at the German headquarters of Wikipedia. She says
    "Now you need three days just to read all the rules."

Even more interesting is what Elisabeth Bauer, who has played a leading role in the German Wikipedia community from the beginning, says about these rules:
    "Discussions in those days didn't last long, because there was hardly anyone there to participate. We often just established rules quickly, without giving them a lot of thought. It seems strange to see how some people today are beating themselves up over things that you yourself simply wrote down at some point."

A development that I've seen happen in completely different circumstances as well...

The Spiegel ONLINE article further focuses, as an example, on one particular debate that went on "backstage" in the discussion pages. It features a completely irrelevant detail, a guy who can neither admit to being wrong nor compromise on that irrelevant detail, and a woman who gets angered by that guy and tries to drown him in facts. The detail in this case is the question of whether or not the Danube Tower is a TV tower. As far as I am concerned, as long as you haven't defined what a TV tower is, you can't answer the question, so the first thing you should do is clarify what the issue is about. And arguments about definitions are moot anyway. A definition is never wrong, it's just more or less useful.

But, as you can guess, a guy with a big ego who can't compromise can waste other people's time and in the end often wins, just because everybody in their right mind realizes they are wasting their time.

It is a sad story and one that, unfortunately, is very typical of online conversations. It makes me wonder, once again, whether widespread education in how to conduct fruitful and constructive arguments wouldn't help to alleviate this issue.

If you found that status report from the inner workings of online communities depressing, I recommend you read this heart-warming NYT story.

Friday, April 23, 2010

Occupational Risk

Yesterday, I went to see my bank advisor to get advice on my pension prospects, or rather the absence thereof. A side effect of constant moving and short-term contracts: I am presently eligible for a pension of 3 cents per month from the State of Arizona, which is a nice gesture but doesn't even pay for the parmesan on the spaghetti. While the banker was at it, he also tried to talk me into a life insurance policy.

After entering age, gender and marital status to calculate the rate, he asked what I do for a living. I'm a physicist, I told him. A submenu then appeared with refined job descriptions: astrophysicist, atomic and molecular physics, etc. Lacking a high energy physics option, I opted for nuclear physics. Hey, that's what it says on my door sign: "Assistant Professor, Sabine Hossenfelder, High Energy and Nuclear Physics." So the banker enters nuclear physicist, and on the screen appears a warning that the occupational risk has to be assessed by a specialist. Well, I say, the only health risk that my occupation brings is that I accidentally poke out my eye with a pencil. And the highest occupational risk of a high energy physicist is probably ending up as a banker. Who wrote that software?

I finally opted against the life insurance, but I'm now signed up for a pension plan. If I pay in for the next three decades I might then actually be able to afford the parmesan. Unless there's some major disaster, like Central Europe being 1 m below sea level by 2041 or so, I actually have a parmesan guarantee. I feel very grown-up now. At some point I'll have to figure out what to do about the spaghetti. And maybe I should make sure there are no fuel rods in my desk drawer, one never knows.

Wednesday, April 21, 2010

It comes soon enough

    I never think of the future - it comes soon enough.
~Albert Einstein

I think of the future frequently - and more often than not I think it could come sooner. But sometimes I am stunned when things I've been talking about actually become reality.

For example, around the time the Internet took off I was thinking it would be great if customers could use this globally connected information pool to obtain additional product details on demand. Such applications are now, though not yet widespread, perfectly doable by scanning a barcode with a phone and downloading the information via a wireless internet connection.

[Video: RedLaser, from Jeffrey Powers on Vimeo]
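
To make that concrete, here is a minimal sketch of the idea. The service URL and the JSON fields in it are entirely made up for illustration; real barcode-lookup services work differently:

    # Hypothetical sketch: scan a barcode, fetch product information online.
    # The endpoint "productinfo.example.org" and its JSON fields are invented.
    import json
    import urllib.request

    def lookup_product(barcode: str) -> dict:
        """Fetch a product record for a given EAN/UPC barcode (hypothetical service)."""
        url = f"https://productinfo.example.org/api/product/{barcode}.json"
        with urllib.request.urlopen(url, timeout=10) as response:
            return json.load(response)

    if __name__ == "__main__":
        info = lookup_product("4006381333931")  # an arbitrary example barcode
        # The fields below are made up; what I'd really want is a breakdown of
        # where the price goes (production, marketing, profit), not just reviews.
        print(info.get("name"), info.get("price_breakdown"))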


Granted, the information I was thinking of was more about the production details than customer reviews, since lack of information skews consumers' decisions. You see, I wanted to improve the world. Basically I was thinking that to decide whether a product is worth the investment, the customer would need to know what fraction of the price went into production, advertisement and marketing, and what is profit. For example, if a product has a higher price but claims to be more environmentally friendly, I would actually like to know how much of the price is due to that friendliness. I would also like to know if a product is less expensive than others because the company pays their workers less, or because they invest less in clogging my mailbox with spam.

Another development that has been on the way for a while is identification, or possibly payment, by fingerprint. While I totally appreciate not having to recall PINs or carry a stack of cards, using fingerprints as an identification method strikes me as one of the dumber ideas I've come across. After all, you leave your fingerprints constantly and everywhere. The method is not yet used sufficiently often, but I have no doubt that once it spreads somebody will come up with a clever way to fake an index finger. Then all they have to do is go into a public restroom and grab a few prints. The fingerprint payment idea made headlines a few years ago, but it doesn't seem to have taken off, which I'm not too surprised about.

Something else that seems to me likely to happen in the next decade or so is that the boundary between virtual and real reality will become increasingly fuzzy, and instead we'll have what could be called a multi-layered reality. Imagine you go into a cafe and your handheld device tells you not only where you are but also the history of the place, who has been there before and what they thought about it. It will also tell you who the people in the room are, if they have signed up for such a service. They might have a profile online containing information they want to share. Maybe they're single and looking for somebody. Maybe they're about to go on a vacation to Cuba and are interested in talking to somebody who has been there before. Maybe they're a physicist and would rather be left alone. There might be the shop owner who has his CV online, and the waitress is writing a blog that you can read while you're waiting for a muffin. I can easily imagine that if sufficiently many people were using this, it would also provide a basis for a new type of semi-virtual reality game.

Something entirely different that I've been thinking about is that before it becomes possible to grow human organs in the lab for transplantation purposes, something that is slowly coming closer to reality, it should be possible to grow meat suitable for consumption without having to bother with the whole animal. Last year I read that scientists are talking about genetically engineering animals to not feel pain. I don't think that's likely to spread, it's far too messy. More likely, a century from now, we'll have factories with organ bags that don't resemble animals at all.

And then there's all those folks who want to become immortal by uploading themselves to a computer. Well, I spent the last evening trying to reanimate my mother's crashed PC. The brain-upload story always gives me a good laugh. You show me a computer that is as complex as, and runs as stably as, the human body for 80+ years. I think people who want to upload themselves to a computer severely underestimate how amazing it is that our bodies function so well and what an incredible achievement of Nature this orchestrated complexity is. I don't think it's entirely impossible that one day we'll replace the human body with some artificial device, but it's much further in the future than people want to believe. I think it is more likely that in the end we'll have some biotech hybrid solution.

In any case, something that seems to me much closer in the future is overcoming the isolation of the human brain. To me, the largest tragedy of life is that we're all alone, and fundamentally so. However, in contrast to uploading your brain to a computer, connecting it to another one doesn't seem to me that far-fetched at all. Neuroscientists have made steady progress on measuring and deciphering brain activity. It is also known that the human brain is incredibly good at learning how to work with new input, and we have the ability to cognitively make extensions of our body our own, an ability known as the "extended mind." The obvious step to take, it seems to me, is not to try to get a computer to decipher somebody's brain activity, but to take the output and connect it as input to somebody else. If that technique becomes doable and is successful, it will dramatically change our lives.

I just hope it comes soon enough so I have a chance to see how it works.

Monday, April 19, 2010

Hello from Germany

As previously mentioned, I am presently on a 1,500 km road-trip from Stockholm to Frankfurt. Skies are clear and blue, but the weather forecast says the wind in Central Europe will continue to blow South-East, directly from Iceland, for some more days. Stefan managed to get a train ticket to Copenhagen, where we met. Currently we are in Hamburg. Below is a photo from the ferry from Denmark to Germany. (If you look closely you can see the guy who took the photo in the reflection in the door.)


All along the way there are stranded travelers. Rental cars are impossible to get. Train tickets are in high demand. A few companies acted fast and are offering bus services between major cities. So far, people are taking it with good humor and staying calm. After all, a volcano eruption is as close to a higher power as one wants to get while alive. In Copenhagen, we were staying in an airport hotel which was peacefully quiet. In the lobby they had an information screen that usually lists departure times. It showed "cancelled" all the way down the list.

Meanwhile, after several days of being grounded, the airlines are starting to grumble about whether the flight ban is necessary, since the scientific basis is largely lacking. Hardly any measurements have been taken, due to a lack of equipment. Some planes seem to have been moved empty between airports, and no problems occurred. Short-distance flights in particular often don't reach cruising altitude anyway. I read this morning that some airports in Europe have been reopened today.

And while I was at it, downloading the photo from my digicam, here's another one. That's what living in the EU is like:


The above is an ironing instruction from my curtain (IKEA, of course). The literal translation of the German instruction is actually not "iron on reverse" but "iron from left." I have no clue why, but if one wears a T-shirt inside out, in German you'd say it's worn "left." Maybe there's a political interpretation for that ;-) But if it makes little sense for a shirt, it makes even less sense for a curtain. Note: In Germany, the side facing the window is left.

While the language barriers in the EU are slowly fading, with the younger generation all speaking English as a second language, one would wish Europe could at least agree on one currency. But our pockets are presently filled with EUR, SEK and DKK. They can't even agree on whether it's Kronor or Kroner!

Saturday, April 17, 2010

Magnetic Monopoles in Spin Ice

Last summer it went through the news: magnetic monopoles had been observed! The first time I read this headline I was pissed off that nobody had told me. Upon reading the article it turned out, however, that the discovery was not one of elementary magnetic monopoles, but of magnetic monopoles in condensed matter systems. It's okay nobody told me that, because I'm not exactly known for my close ties to the condensed matter community. I thought at the time it might be worth mentioning on this blog, but changed my mind at the prospect of explaining what a corner-touching tetrahedral lattice is. Then last month I heard a seminar by Steven Bramwell, one of the experimentalists who made it into the headlines and one of the authors of the Nature article reporting on the discovery. He offered a simplified two-dimensional picture that I found very illuminating and totally blog-suitable.

Coulomb's Needle

Before we start, let me point out that while elementary magnetic monopoles, if discovered, would mean a change to the fundamental laws of electrodynamics, the magnetic monopoles in condensed matter systems are perfectly compatible with usual electrodynamics. We all learn in school that there are no magnetic monopoles. But there are configurations that appear like magnetic monopoles. In fact, I learned in Bramwell's talk that Coulomb did not only come up with a potential for electric point charges, but also with one for magnetic point charges. The latter didn't survive into modern textbooks, though. What Coulomb did was to measure the magnetic field around the end of a very thin needle. You can nicely measure this field, except of course exactly inside the needle. We know today that the magnetic field lines going through the needle into its tip exactly balance the outgoing ones that Coulomb measured. There are thus no sources for the magnetic field. Whether or not the needle is straight doesn't matter, it just has to be something that's essentially one-dimensional. That the field around the end-tip looks like the one for electric charges is simply a consequence of the geometry*.
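
To make the geometry argument a little more concrete (a standard textbook estimate, not something from the talk): near one end of a long, thin, uniformly magnetized needle with magnetization M and cross-section A, and far from the other end, the field outside the material looks like that of a point charge,

    \vec{B} \approx \frac{\mu_0}{4\pi}\,\frac{q_m}{r^2}\,\hat{r}, \qquad q_m = M A,

even though the total flux through any closed surface enclosing the tip, needle included, is of course still zero.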

In any case, you all know what happens if you take a magnet and break it apart. Instead of getting two opposite magnetic charges, you just get two smaller magnets, so that's not useful. The way you “make” a magnetic monopole is instead to create something akin to the needle that Coulomb was measuring the field of, deform the needle, and then hide it in a haystack. In the end, the only thing you can sensibly talk about are the endpoints of the needle, which stick out if you measure the magnetic field.

Spin Ice

The system that the experimentalists used for their measurements is called a “spin ice.” The spin ice, named for its resemblance to water ice, features the above-mentioned corner-touching tetrahedral lattice with little magnets on the corners of the tetrahedra, arising from the magnetic moments of the atoms at these locations. This arrangement is akin to that of H2O ice. In ice, the larger oxygen atom sits in the middle of a tetrahedron, surrounded by 4 hydrogen atoms, two of which are nearby (the ones "belonging" to the central oxygen atom), and two of which are further away (belonging to neighboring oxygen atoms), making up so-called "hydrogen bonds." If you draw an arrow towards the oxygen in the center when the hydrogen is close, and an arrow away when it's far, you have a corner-touching tetrahedral lattice with four arrows to or from the center of each tetrahedron (see picture).

Now think of a structure where, instead of hydrogen atoms, there are titanium atoms and rare-earth atoms such as holmium located at the corners of the tetrahedra. These metal atoms have magnetic moments which, in this lattice configuration, can only point either into or out of the tetrahedra, just like the arrows indicating the positions of the hydrogen atoms in ice. That's the spin ice.

Magnetic Monopoles on a Sheet of Paper

Confused? We can vastly simplify the spin ice by looking at a 2-dimensional analogy. You can do it on a sheet of paper, as shown in the picture below. Use a pencil and have an eraser ready. The little arrows are the magnets. The lattice rule is that for each square, two arrows go in and two go out, as with the spin ice. This is the preferred configuration.


[Click to enlarge]

Now let us create a defect in that lattice by switching one of the arrows. You now notice that, on top of the background lattice, there's a magnet sitting there. One pole has three arrows in (red), the other three arrows out (green). If you measured the magnetic field, you would find these two poles.


[Click to enlarge]

The next step is to pull the ends of the magnet apart without creating further defects. The picture below shows a first step. If you draw it on a sheet of paper, you will easily see that you can pull the defects further apart, not necessarily in one straight line. The defects will be connected by a one-way oriented path of arrows along the route you've pulled them apart. That's creating and deforming Coulomb's needle, if you wish.

[Click to enlarge]

The final step is to add a few more such paired defects and pull them apart. Then sit back, look at your lattice and try to find Coulomb's needle, i.e. the connection between the oppositely charged defects. You'll notice the following: there is no unique connection between any two defects. They can be connected by many different lines, and they can no longer be considered paired either. The picture below shows an example with two defects and some of the possible connection lines. If you look closely, you will find more connections than I've drawn.


[Click to enlarge]

Thus, you have “hidden” the needle in the background lattice. The defects are no longer paired, but can move around independently as if they were, you get it, magnetic monopoles! You can then go and measure some of their properties, and it is these measurements, confirming their existence, that made the headlines last summer.
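
If you prefer pixels to pencil and eraser, here is a minimal toy sketch of the same construction in Python. The lattice encoding is my own choice for illustration, not anything from the Bramwell paper: arrows live on the bonds of a square lattice, the ice rule demands two in and two out at every vertex, and flipping a single arrow creates the pair of oppositely "charged" defects described above.

    import numpy as np

    N = 6  # vertices per side, periodic boundaries

    # h[x, y] = +1: the horizontal bond leaving vertex (x, y) to the right points right.
    # v[x, y] = +1: the vertical bond leaving vertex (x, y) upwards points up.
    # "All right, all up" satisfies the ice rule (two in, two out) at every vertex.
    h = np.ones((N, N), dtype=int)
    v = np.ones((N, N), dtype=int)

    def charge(x, y):
        """(# arrows in) - (# arrows out) at vertex (x, y); zero means the ice rule holds."""
        c = 0
        c += 1 if h[(x - 1) % N, y] == +1 else -1  # bond coming from the left
        c += 1 if h[x, y] == -1 else -1            # bond going to the right
        c += 1 if v[x, (y - 1) % N] == +1 else -1  # bond coming from below
        c += 1 if v[x, y] == -1 else -1            # bond going up
        return c

    # Check the ice rule everywhere.
    assert all(charge(x, y) == 0 for x in range(N) for y in range(N))

    # Flip a single horizontal arrow: its two end vertices become a defect pair,
    # one with three arrows in (charge +2), one with three arrows out (charge -2).
    h[2, 3] *= -1
    print(charge(2, 3), charge(3, 3))  # prints: 2 -2

From here you could flip further arrows along any path to move the two defects apart without disturbing the ice rule anywhere else, which is the pencil-and-eraser exercise above.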


* No, wait, it's actually an entropic force!

Friday, April 16, 2010

Unfriendly Skies

Unless you're deaf and blind and spent the last 24 hours in your closet, you probably heard that there's a volcano with an unpronounceable name in Iceland that's currently erupting. The volcanic ash is drifting south-east over Scandinavia towards Central Europe. Due to the risk of engine failure, a significant part of the air traffic in these areas has been shut down entirely.

For a multi-layered set of complex reasons I have to be in Frankfurt on Monday. I am actually officially on vacation next week, no really. This means I'll now be on the road the whole weekend. There's a blog post on magnetic monopoles in the pipeline, but it still needs some cleaning up. Either way, this is just to say you might not hear from us much for a while.

Tuesday, April 13, 2010

What I am is what I am

When people ask me what I'm doing I tell them I'm a physicist. Then they ask me if I'm in experiment or theory. And I'll tell them I'm a phenomenologist. More often than not, the reply is "A what?" So here's what it means, or at least what it means to me.

The aim of physics is to find accurate descriptions of Nature. Over the centuries, this task has split into several sub-domains. Since mathematics has proved reasonably effective at describing Nature, it is no surprise that a considerable amount of attention is dedicated to the mathematical formulation of our theories. That's mathematical physics, the one end of the spectrum.

At the very other end there's experimental physics: that's going out and bringing in real-world data, with all the required analysis and error bars. If I learned one thing in my experimental physics classes, it's to never forget those error bars. The other thing I learned is that oscilloscopes are allergic to my presence.

In between mathematical and experimental physics, there's theoretical physics and phenomenology, with theory leaning towards the math, and phenomenology leaning towards experiment.

Theoretical physics is the construction of theories. (Depending on your philosophy you might insist it's a discovery rather than a construction, but I'm not a philosopher, so please excuse me.) Theory inevitably brings a considerable amount of math, but the focus is on finding a description of Nature. Once you start with some theory, the requirement of mathematical consistency puts very strong constraints on what you can and can't do, and it can take substantial time and effort to figure out the details. It's thus no surprise the boundary between theoretical and mathematical physics is fuzzy. Historically, the two have often proceeded hand in hand.

In contrast to math though, it happens frequently that theorists work with assumptions that have not been proven to be true but there are physical arguments to believe they are true or, if not true, then there are at least reasons to believe that they work. Theoretical physicists thus have a freedom of intuition that mathematical physicists don't have. The result is that the mathematical physicist will typically tell the theoretical physicist he's sloppy, while the theoretical physicist will call the mathematical physicist a nitpicker.

The emphasis of phenomenology is on developing models either for already available experimental data, or for theories for which a connection to the data is so far missing. Phenomenological models are not meant to be a fundamental description of Nature. Their purpose is to connect the theory with the experiment, both ways. This typically happens by making simplifying assumptions or considering limits in which only some features of a theory are relevant. On the other hand, if you have a phenomenological model that describes the data very well, you can extract from it some of the features the underlying theory must have. Note however that the derivation is one-way: You can, in principle, derive a phenomenological model from the full theory, but not the other way 'round. The result is that the theorist will typically look down on the phenomenologist for lacking closeness to fundamental truths, while the phenomenologist will consider the theorist a dreamer.

Needless to say, there is no strict boundary between phenomenology and theory either. For example, you can very well have a phenomenological model that makes predictions which are not observable, or not yet observable. So it doesn't provide much of a bridge to experiment. On the other hand, the more parameters you use in a model, the closer it gets to data-fitting and the less useful it is for extracting something of fundamental value. But well, what would the world be without friction?


I originally studied mathematics and have a Bachelor's degree in math rather than physics. Today I'm a phenomenologist because I know I have a tendency to get lost in mathematics. It's a very wide world, the world of mathematics. There are many people who have the courage to search for fundamental truths following nothing but their intuition, but I prefer to have reality constraints. The underlying reason is that I believe mathematical consistency alone is not sufficient to arrive at the right theory.

The reason I work on the phenomenology of quantum gravity in particular is that I believe quantum gravity is where our next major step forward will be, and I think it will revolutionize our understanding of reality. Some of my more optimistic colleagues think observations are a decade away. I think it's more likely two or three decades. Occasionally, people who know what the Planck scale is laugh when I tell them I work on the phenomenology of quantum gravity. Well, if you had told them three decades ago that we'd one day be able to distinguish different inflation scenarios by high precision cosmology, they would have laughed as well. So, no, we're not there yet, but I'm confident it's in our future.

Saturday, April 10, 2010

Whodunit?

I know many examples where senior researchers were listed as authors of a paper even though they didn't contribute to it. In some cases I doubt they ever read the paper. I know people who looked at the arXiv in the morning to find they had "written" a paper they didn't previously know of. Especially when it comes to conference proceedings, which are often just repetitions of already published results, adding a previous collaborator is more an act of politeness than a statement about the contribution to the act of writing. That won't surprise anybody who works in the field.

In some cases, interpreting the different authors' contributions to a paper can be more subtle. Supervisors, for example, are frequently named as co-authors not because they contributed to the paper but because they are the ones with the grant who made the experiment or research project possible to begin with. The problem here is of a practical sort. By bringing in the money to make a project possible, they arguably do make an essential contribution. In return, the funding bodies want to see their money put to good use, and having the grantee's name on a paper increases the chances for future funding. Especially for researchers too young to apply for grants themselves (application typically requires a PhD), adding the supervisor is thus an act of self-interest. The problem starts at exactly the point when the paper is submitted to a journal and it is declared that all the authors made significant contributions to its content.

How to read an author list is tacit knowledge that differs from field to field. In some fields, the first author is the one who actually did the work, the last author is the one with the grant, and the ones in the middle might be ordered by some obscure ranking. In other fields, author ordering is strictly alphabetical and being first author is simply an ode to your family name (Abado, A. A. et al).

I was thinking about the meaning of author lists yesterday when I read this ridiculous article in the Times Higher Education: Phone book et al: one paper, 45 references, 144 authors. It can be summarized as: a professor of ethics comes across a summary paper from the Sloan Digital Sky Survey and counts 144 authors. Since he hasn't seen such long author lists in his field, he concludes there must be something wrong with physics. Clearly, people like to have their names on such long author lists because "Careers depend on number of publications."

Now, I have written many times on this blog, most recently here, that the use of metrics for scientific success can indeed hinder progress and should be done with caution. But the ethics professor's implicit assertion that hiring committees are not able to distinguish between a single-authored paper and a collaboration's summary paper simply shows he has no clue what he's talking about. Even when it comes to the above-mentioned papers with few authors, the question of who made what contribution is typically (extensively!) addressed in the letters of recommendation accompanying a publication list. The reality is that in experimental physics such long, or even longer, author lists are not uncommon. It's simply a consequence of these experiments being enormously complex in the technology and software used. If anything, the THE article shows that comparing ethics to physics is like comparing fruit flies to homo sapiens. I'll leave it to you to decide which stands for what.

In any case, the obvious solution would be a better way to declare what each author's contribution to a paper was. This has been discussed many times previously, and I am hopeful that sooner or later it will become reality.

On that note, YoungFemaleScientist had a post this week on the Ethics of Publishing, and hits upon more relevant problems caused by the pressure to perform according to a certain success standard. That's the practice of splitting up papers into "least publishable units" or dumping all sorts of stuff together in the hope that it will overwhelm the referees and something of it will make a splash. The latter is not very common in hep-th though. I guess that's because there are too many people working on too closely related topics, so everybody tries to get even the smallest results out as soon as possible, because otherwise they risk being scooped.

Thursday, April 08, 2010

OJ 287

OJ 287 is a quasar at a distance of approximately 3.5 Giga light-years with a quite unremarkable Wikipedia entry. I recently heard a talk by Mauri Valtonen from Tuorla Observatory about OJ 287 and was totally enchanted! That quasar deserves better than this little Wikipedia entry, so here's to OJ.

The figure below, from arXiv:0809.1280, shows the apparent magnitude of OJ 287 as recorded over the course of time:

[Click to enlarge]

There are several things one should note about this figure. First, a star's magnitude is a logarithmic measure (similar to the magnitude of an earthquake). That means the variability in the brightness ranges over many orders of magnitude. Second, the first measurements date back more than a century! In his talk, Valtonen explained that to obtain this data they have to sort through the old archives in observatories and find the photographic plates that might have captured the system back then. (Or rather, some student has to do this.)
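
For reference, this is the standard astronomical convention, not anything specific to this paper: the apparent magnitudes of two sources are related to their fluxes by

    m_1 - m_2 = -2.5 \, \log_{10}\!\left(\frac{F_1}{F_2}\right),

so a difference of 5 magnitudes corresponds to a factor of 100 in brightness.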

To look at the physics: even by eye you can see some patterns in the data. There seems to be a long-term variation on a roughly 60-year scale, with one peak at about 1910 and one at about 1970. Then there's a shorter variation with peaks about every 12 years. If you look closely, you can also see in the newer data that the peak is actually a double peak. The more sophisticated way to extract these regularities is to do a Fourier analysis of the data, which Valtonen and his collaborators did to quantify these periodicities. (Chad Orzel did a good job explaining Fourier transformation here.)
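
As a toy illustration of that last step (my own sketch, not the actual analysis, which has to cope with very uneven sampling), here is how you would pull the two periods out of a synthetic, evenly sampled light curve with a plain Fourier transform:

    import numpy as np

    # Synthetic light curve: 120 years of monthly samples with a 60-year and a
    # 12-year modulation plus noise, standing in for the long- and short-term
    # variations visible in the real data.
    t = np.arange(0, 120, 1.0 / 12.0)  # time in years
    rng = np.random.default_rng(0)
    signal = (1.0 * np.sin(2 * np.pi * t / 60.0)
              + 0.5 * np.sin(2 * np.pi * t / 12.0)
              + 0.2 * rng.normal(size=t.size))

    # Power spectrum; it peaks at the underlying frequencies.
    freqs = np.fft.rfftfreq(t.size, d=1.0 / 12.0)  # in cycles per year
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2

    # Report the two strongest peaks, skipping the zero-frequency bin.
    strongest = np.argsort(power[1:])[::-1][:2] + 1
    for k in strongest:
        print(f"period of about {1.0 / freqs[k]:.0f} years")

For the real, irregularly sampled archival data one would use something like a Lomb-Scargle periodogram instead, but the idea is the same.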

Sitting on this (and more) data, the physicist of course starts to wonder what might cause this particular variability. So Valtonen and his coworkers set out to develop a model to fit the data, and to make predictions to test it. To make a long story short, here's what they came up with: OJ 287 is a binary system. It's a supermassive black hole with an accretion disk, and a smaller black hole in orbit around it. The masses of the objects can be inferred from the data, or rather, they are parameters in the model. The slow accretion of the gas in the disk onto the large black hole and the outgoing jet is what makes the object visible. But in addition to this, on its orbit the small black hole passes through the accretion disk, causing the gas to dramatically heat up. The heated gas flows out on both sides of the disk and radiates strongly for several weeks.

The smaller black hole passing through the accretion disk is what causes the peaks in the brightness. It's a double peak because of the eccentricity of the orbit, and the long-term variation is due to the perihelion shift which for this system is quite sizeable. Below is my sketch of the situation. The red thing is supposed to be the accretion disk, and the black dots are the two black holes.



You find an actual fit to the data in Predicting the Next Outbursts of OJ 287 in 2006-2010, M. J. Valtonen et al 2006 ApJ 646 36. So now that one has a model that fits the available data, the good physicist goes and makes a prediction. That was done in the paper I just mentioned. It should be added that at this time there were other models for the system's variability, such as variations in the accretion disk or a wobble in the jet axis (the jet axis of that system is almost exactly directed at us, which makes our observation very sensitive to changes in that axis). So they made a prediction with the binary-system-accretion-disk model that the next outburst would be on September 13, 2007, plus or minus 2 days of uncertainty. And here's what happened:

[Figure 1 from arXiv:0809.1280]

The black squares are the data, the dashed line is the theoretical prediction. I was so impressed by this! I mean, look, that thing is giga-light-years away! There's all sorts of plasma physics and turbulence and weird things going on there. And they manage to predict the day the next outburst will take place.

But it's even better than that, because the timing of the burst contains a big chunk of general relativity too: the system loses energy due to the emission of gravitational waves. This causes the orbit of the small black hole to shrink and the rotation to speed up. If the system were not losing this energy, the September 2007 outburst would have been 20 days later.


Sunday, April 04, 2010

Peer Review VI

Peer review is at the heart of knowledge discovery. It is thus no surprise that it is frequently a subject of discussion on this and other science blogs. The other day I read a great post by Cameron Neylon, Peer Review: What is it good for? It occurred to me that instead of writing the 124th comment there, it might be more fruitful if we discuss here what I've - oddly enough - repeatedly suggested on others' blogs, but evidently not on my own.

To first make sure we're on the same page, here's how peer review works today. You write a manuscript and submit it to a journal, where it is assigned to an editor. Exactly what happens then depends somewhat on the journal. In some cases the editors sort out a lot of submissions already at this stage. At very high impact journals like Science and Nature, editors reject the vast majority of papers not because they're wrong but because they're not deemed relevant enough, but these journals are exceptions. In the general case, your manuscript will be assigned a number and be sent to one or several of your colleagues who (are thought to) work on related topics and who are asked to write a report on your paper. That's the peer review.

The assignment of people to your paper doesn't always work too well, so they have a chance to decline and suggest somebody else. At least in my experience, in most cases the manuscript will be sent to two people. Some journals settle on one, some do three. In some cases they might opt for an additional report later on. In any case, the reviewers are typically asked for a written assessment of the manuscript; that's the most important part. They are also asked whether the manuscript is suitable for that particular journal. And typically they'll have to judge the manuscript on a 5-point scale for how interesting the content is, how good the presentation is, etc. Finally, they'll have to make a recommendation on whether the paper is suitable for publication in its present form, whether revisions should be done or have to be done, or whether it's not suitable for publication. The reviewer's judgement is anonymous. The author does not generally know who they are. (That's the default. There is in principle nothing prohibiting the reviewer from contacting the author directly, but few do that. And in some cases it's just easy to guess who wrote a report.)

In most cases the report will say revisions have to be done. You then revise the paper, reply to the queries and resubmit. The exchange is mediated by the editor and can go back and forth several times, though most journals keep this process to a few resubmissions only. The whole procedure can take anything between a few weeks and several years. In the end your paper might get rejected anyway. While in some cases one learns something from the reports, the process is mostly time-consuming and frustrating. The final decision on your manuscript is made by the editor. They will generally, but not always, follow the advice of the reports.

On the practical side, the manuscript and review submission is typically done through a web interface. Every publisher has their own labyrinth of usernames, passwords, access codes, manuscript numbers, article tracking numbers, and other numbers. The most confusing part for me is that some journals assign different author and reviewer accounts to the same person. I typically write reports on one or two papers per month. That's more than ten times the number of manuscripts I submit. There are very few journals that pay reviewers for their report. JHEP announced a while back they would. Most people do it as a community service.

There are several well-known problems with this procedure. Most commonly bemoaned is that since the reviewers are anonymous they might abuse the report to their own advantage. That might take the form of recommending the author cite the reviewer's papers (you're well advised to do that). In the worst case they might reject a paper not because they found a mistake but because it's incompatible with their own convictions. One can decline to review a paper in this case due to a "conflict of interest," but well. We all have conflicting interests; otherwise we'd all be doing the same thing. What happens more often, though, is that the reviewer didn't read, or at least didn't think about, the paper and writes a sloppy report that might mistakenly accept a wrong paper or reject a right one.

I have written many times that the way to address these community-based problems - to the extent it's possible - is to make sure researchers' sole concern is the advancement of science, and not other sorts of financial pressure, public pressure or time pressure that might make it seem smart to down-thumb competitors. But that's not the aspect I want to focus on here.

Instead I want to focus here on the practical problems. One is that the exchange between the author and the reviewers is unnecessarily slow and cumbersome. Submitting a written report and having it passed on by the editor must be a relic from the times when such a report was carved in stone and shipped across the ocean. It would be vastly preferable if journals would instead provide an online interface to communicate with the author that preserves the anonymity of the reviewer. This would allow the reviewer to ask some quick, clarifying questions and not only make their task easier, but also prevent misunderstandings. Typically, if the reviewer has misunderstood something in your manuscript, you have to do a lot of tip-toeing to tell them they're wrong without telling them they're wrong because, you see, the editor is reading every word and knows who they are. The idea isn't that such a communication interface should replace the report, just that it could accompany the review process, with a written report submitted after some set amount of time.

Another practical problem with today's peer review process is that when your paper gets rejected and you submit it to another journal, you have to start all over again. This multiplies efforts on both the authors' and the reviewers' sides. This is one of the reasons why I'm suggesting decoupling the peer review process from the journals. The idea is simply that the author would submit their manuscript not for publication to a journal, but first to some independent agency to obtain a report (or several). This report would then be submitted with the manuscript to the journal. The journal would still have to assess whether the manuscript is suitable for that particular journal, but the point is that the author has a stamp of legitimacy independent of a journal reference. It's then up to the author what to do with the report. You could, for example, just use the report (in some summarized form) with an arXiv upload.

There could be several such peer review agencies, and they might operate differently. Some might pay the reviewer, some might not. Some might ask for a fee, some might not. One would see which works best. In the long run I would expect a report's relevance to depend on the reputation of the agency.

This would address several problems at once. One is the "power of journals" frequently criticized by the open-access movement. That's because in many, if not most, fields a journal reference is still considered a sign of quality of your work. But journal subscriptions are very costly, and thus the pressure to publish in journals often means scientific studies are not freely accessible to the public. The reason is simply that today subscription journals are the main providers of peer review. But there is actually no reason for this connection. Decoupling publication from the review process would remove that problem. It would also address the above-mentioned problem that when your manuscript gets rejected from one journal you have to start all over again with the peer review. Finally, it would add more diversity to the procedure, in that reviewers and authors could choose which agency suits them best. For example, some might have an anonymous, and some an open review process. Time would tell which in the end is more credible.

There is the question of financing of course. Again, there could be several models for that. As I've said many times before, I think if it's a public service, it should be financed like a public service. That is to say, since peer review is so essential to scientific progress, such peer review agencies should be eligible to apply for governmental funding. One would hope though that, as today, most of the time and effort would be carried by the researchers themselves as a community service.

Friday, April 02, 2010

This and That


Thursday, April 01, 2010

Baby Universe Created in Particle Smasher

Geneva, April 1, 2010: In the first collisions at the Large Hadron Collider in Geneva, a small universe has been created. Scientists discuss how to deal with it.

At the Large Hadron Collider at CERN in Geneva, also known as the Big Bang Machine or The Particle Smasher, the first collisions with a beam energy of 3.5 TeV each took place this week on Tuesday morning. But a small surprise remained unnoticed until yesterday. In a corner of the mighty ATLAS detector, cleaning personnel found a small universe: “I wasn't sure if it's organic waste, or if it goes in the grey bin,” says Jessica Nettoyer, first to make the discovery, “So I go and ask the student. And he's like, you know, like. Boah! And runs off with the thing. O tempora, o mores!”

The creation of a universe in a particle collision had been suggested by researchers, but was not widely taken seriously. More accepted by the physics community has been the possibility of creating a small black hole which, according to some theories, could harbor a universe by itself, a so-called “baby universe.” CERN scientists believe this is what happened on Tuesday, though they caution that more analysis is necessary.

The baby universe is now 2 cm in diameter and has been sealed away under vacuum. Scientists from various disciplines all over the world have been asked for advice. “It's a universe. We should put it into a nice landscape,” says Brian Blue, a leading American string theorist. “It carries the possibility of developing intelligent life,” says Anne-Marie Dogrublaskinfizwysky-Grubowskiwitz, Professor of Universal Ethics at the University of Zwinkerliqrskywinsk, “It is unethical to keep it in a laboratory.” The pope has been consulted to develop an action plan in case mankind rises to the level of god-like creatures.

“It's mind-boggling!” Carola Seanning said in a hastily organized meeting yesterday evening. “It means that our universe too could sit in a jar in some lab!” Meanwhile, plans are being made to use the universe's rapidly increasing complexity to develop a super-computer that could solve the halting problem and even do your tax return. A CERN spokesperson said: “We are searching for ways to communicate with intelligent creatures that might develop, and to try to establish means of information exchange. Every input is highly welcome.” For now CERN is looking for a name for the baby universe. You can submit your suggestion in the comment section.