Friday, July 31, 2009

And how open would you want your science?

I just read Dan's recent post "What, exactly, is Open Science?" He names four fundamental goals of open science:
  • Transparency in experimental methodology, observation, and collection of data.
  • Public availability and reusability of scientific data.
  • Public accessibility and transparency of scientific communication.
  • Using web-based tools to facilitate scientific collaboration.

These seem to be rather experimentally focused, so let me add some words from the perspective of a theorist. Since I just finished reading Surowiecki's "Wisdom of Crowds" (see review), I now feel better equipped to get across something I already said in my post We Are Einstein, so let me quote myself and then explain:

"[A]n environment with a very high interaction rate thermalizes quickly, and can be very destructive in the early stage of an idea's development. A highly connected community means we’ll have to watch out very carefully for sociological phenomena that might affect objectivity, and work towards premature consensus. We will have to watch out for fads that grow out of proportion, and we will have to find a way to protect the young ideas that “you have to ram down people's throats,” in Atkin's words, until people are ready to swallow them. There is no reason to assume scientists are immune to sociological effects."
With the wisdom I gathered from Surowiecki's book, the point I was trying to make is that sharing too much information and being too tightly connected will actually lead to a dumb rather than a smart community.

Yes, that is right. What I am saying is that all the sharing and openness can actually harm progress. In fact, I think we already share too much information too prematurely. The reason is that scientists too are only human. If we hear colleagues talk who are genuinely excited about a topic, chances are we'll get interested. If we have an idea in an early stage and bounce it off a lot of people, it will lose its edges because we'll try to make it fit. If we hear something repeatedly, we are likely to think it's of some relevance. If we know the opinions of other people, in particular people with a higher social status or more experience, we'll try to fit in. That's what humans do. That's why crowds make dumb decisions. That's how groupthink starts, that's where herding comes from, that's how hypes and bubbles are created. As Surowiecki points out, independence during the opinion-making process is essential for an outcome that reflects all the wisdom present in the crowd.
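To see how fragile that independence is, here is a minimal simulation sketch (my own toy model, not from Surowiecki's book; the crowd size, noise level and the 80/20 anchoring weight are arbitrary illustrations): an "independent" crowd errs individually, while a "herding" crowd anchors on one loud early opinion.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0   # the quantity the crowd tries to estimate
crowd = 10_000  # number of crowd members

# Independent crowd: every member makes their own noisy judgement.
independent = truth + rng.normal(0.0, 20.0, crowd)

# Herding crowd: members anchor 80% on one shared early opinion,
# which is itself just as noisy as any individual guess.
early_opinion = truth + rng.normal(0.0, 20.0)
herding = 0.8 * early_opinion + 0.2 * (truth + rng.normal(0.0, 20.0, crowd))

print(f"independent crowd average: {independent.mean():.1f}")
print(f"herding crowd average:     {herding.mean():.1f}")
```

The independent average converges to the truth as the crowd grows; the herding average inherits whatever error the early opinion had, no matter how large the crowd gets - which is exactly the thermalization worry above.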

Of course none of that applies to you, the superior and entirely rational scientist, because you are different. Funny, though, that study after study shows scientists are just like all other people.

Dan writes in his post that he wants the incentive structure to be changed such that it supports openness. By that he means "Work. Finish. Publish. Release." Again, this seems specific to experiment (a theory is released when it's published). I do of course agree on the goal, but not on the means. I am generally suspicious of any "incentives" that are supposed to push scientists into doing something they wouldn't voluntarily do. We do have such incentives today. And they are counterproductive. I don't want them to be replaced with other incentives that somebody cooked up on his blog and that will likely turn out to be equally counterproductive, though for other reasons. That's why I say the only thing we have to rely on is our own judgement, and what we should be doing is to avoid any distortion of the opinion-making process. And for that, we should be paying attention to what advice our colleagues from psychology and sociology have to offer.

Sometimes when I hear Science 2.0 fans fantasize about the brave new world they want to create, one in which every scientist throws his thoughts into a vast global pool of knowledge and thousands of colleagues contribute and advise, I get really scared. For all we can tell from current knowledge, the result will be a combination of streamlining and self-supporting fads. What scientists really need is more time and more freedom to play with their ideas without pressure to fit in, to publish, to make up their minds.

Thus, my bottom line is always the same: You can dream up any 2.0 utopia you want. But in reality it will be populated with imperfect, irrational humans. If you don't take into account well-studied sociological and psychological effects, your utopia will be a dystopia. Science can be too open.

Tuesday, July 28, 2009

Book Review: “The Wisdom of Crowds” by James Surowiecki

“The Wisdom of Crowds”
By James Surowiecki
Anchor; Reprint edition (Aug 16 2005)

James Surowiecki’s book is an entertaining summary of many recent and not-so-recent studies on crowd behavior. The book comes with many references, and provides quite a balanced assessment of current knowledge. Unlike what the title might suggest, Surowiecki’s book is not a praise of “The Wisdom of Crowds,” but rather an examination of the circumstances under which crowds are wise, and of the purposes for which this wisdom might be useful.

Surowiecki distinguishes between three different problems posed to a crowd: problems of cognition, coordination and cooperation. The book is divided into two parts. The first part offers a lot of examples of these problems and of crowds’ attempts to solve them. The second part looks into the question of which conditions are necessary for a successful solution. The author identifies three such conditions: diversity, independence (of individuals from each other), and decentralization – though with qualifiers:
“[D]ecentralization works well under some conditions and not very well under others. In the past decade, it’s been easy to believe that if a system is decentralized, then it must work well. But all you need to do is look at a traffic jam – or, for that matter, at the U.S. intelligence community – to recognize that getting rid of a central authority is not a panacea. Similarly, people have become enamored of the idea that decentralization is somehow natural or automatic. [However,] it’s hard to make real decentralization work, and hard to keep it going, and easy for decentralization to become disorganization.”

This paragraph makes clear that understanding the ways crowds make decisions is necessary to set up a system such that decision making is smart. Intelligent organization requires thinking – and scientific research.

Surowiecki warns of factors that dumb down the decisions of groups, most notably skewed information, groupthink, and herding, all of which lead to suboptimal decisions, and potentially to disastrous failures.

The book draws on many examples; the recurring ones are betting markets and the financial and economic system. The author also dedicates a chapter to the academic system, and ends with a discussion of politics. While the elaborations on the financial markets are extensive and insightful, though somewhat repetitive, those on politics are well meant but vague, and those on academia are hopelessly naïve:

“The coin of the realm, for most scientists, is not cash but rather recognition. Even so, scientists are undoubtedly as self-seeking and as self-interested as the rest of us. The genius of the way science is organized, though, makes their self-interested behavior redound to the benefits of all of us. In the process of winning notoriety for themselves, they make the group – that is, the scientific community and then, indirectly, the rest of us – smarter.”

The chapter comes with very few, basically irrelevant references, and leaves me with the impression the author has next to no experience with academic research. Unfortunately, “the genius of the way science is organized” is today severely affected by the various pressures researchers are subject to in their decision-making process, most notably financial and time pressure. That, together with herding, lacking independence, an information basis that has meanwhile completely thermalized, and a specialization and fragmentation that promote groupthink, means that several of the factors Surowiecki previously identified as necessary for smart decision making are not fulfilled. And that doesn’t even touch on the question of how well the “smartness” of the scientific community reaches “the rest of us.” For more details, see my post We have only ourselves to judge each other – which suggests the steps to be taken to allow the scientific community to indeed be a decentralized, smart crowd.

A central theme of “The Wisdom of Crowds” is that leadership by a single person or a few people is unlikely to be superior to using the full available knowledge (of the group, the company, the community, ie the crowd). Humans tend to attribute success and failure to single persons where they might instead have been simply a result of lucky or unlucky circumstances. A consistently better performance, so the author argues, cannot be achieved by picking “the” right person, but by accessing and aggregating the wisdom of the crowd.

It remains somewhat unclear throughout the book what role Surowiecki assigns to experts, though he writes in the afterword that the expert is of course necessary to provide information, for without information neither a crowd nor anybody else can make qualified decisions. He cautions however that many studies have shown that specialists - like all other people - tend to be overconfident about their knowledge and fall for the “Illusion of Knowledge,” ie they fail to see the limits of their own knowledge. If the role of the experts is to provide the crowd with information, then one should keep in mind that the way information is communicated can spoil the ability to make good decisions, eg by making information seem more (or less) important than it actually is, or by attaching irrelevant details like what specific persons thought about it. (And let’s not even talk about the problem of simply inventing information.)

In any case, the book makes a very compelling case that there is a large unused potential in the wisdom of crowds and that, if we know how to tap into this potential, we could use it to improve decision making in certain situations. Especially scientists, as members of large communities and of academic institutions, can learn a lot from better understanding under which circumstances particular decision-making processes have been successful. The book conveys an optimistic but also cautious message, since there are many examples of stupid crowds as well. While the examples in the book fit very well into the argument the author is making, I am (as often) left to wonder whether there are examples that did not fit into the theme and thus are not to be found in the book.

Altogether, “The Wisdom of Crowds” is a very recommendable book, informative and well written. If this were an Amazon review, I’d give five stars.

Read my other book reviews.

Monday, July 27, 2009

Röser's equation

At low temperatures, some materials display a feature known as "superconductivity" - the total loss of electric resistance. Electrical currents in such a material, once initiated, propagate ideally forever. Superconductivity sets in at a so-called "jump temperature," below which the material changes its electric properties. "Low" temperature in this case means indeed very low: The phenomenon was first observed in mercury, with a jump temperature of only 4.19 Kelvin. Jump temperatures of other metallic compounds reach up to 23 Kelvin, in Nb3Ge. Bardeen, Cooper and Schrieffer received a Nobel Prize in 1972 for their theoretical explanation of superconductivity in these materials.

These temperatures seem way too low to ever make superconductivity useful for daily life. But over the last two decades an increasing number of substances has been discovered, the so-called "high-temperature superconductors" (HTS), starting with the discovery of superconductivity in barium lanthanum copper oxide at 35 Kelvin by Georg Bednorz and K. Alexander Müller in 1986, up to the recent class of so-called iron pnictides.

For them, the jump temperature can be much higher, typically above 30 Kelvin, with the current record of 138 Kelvin held by a ceramic oxide containing thallium, mercury, copper, barium, and calcium. However, this search has up to now been rather erratic, with groups of researchers more or less systematically testing promising classes of laboratory-grown crystals. So far, the jump temperature of a crystal could not be predicted.

It is then quite astonishing that Hans-Peter Röser, professor at the Institute of Space Systems at the University of Stuttgart, Germany, found a simple equation relating the geometric structure of a crystal to its jump temperature. Röser's equation for the (critical) jump-temperature Tc reads
    4 π k m_e (2x)^2 n^(-2/3) = h^2 / Tc

where k is Boltzmann's constant, h is Planck's constant, m_e is the electron mass, x is the doping distance of the crystal, and n is the number of superconducting layers in the crystal (usually 1, 2, or 3). If one plots these quantities for the known HTS, one obtains the following graph, where the straight line is the prediction of Röser's equation:


(from: A Correlation Between Tc of Fe-based HT Superconductors and the Crystal Super Lattice Constants of the Doping Element Positions by Felix Huber, Hans Peter Roeser, Maria von Schoenermark, Proc. Int. Symp. Fe-Pnictide Superconductors, J. Phys. Soc. Jpn. 77 (2008) Suppl. C, pp. 142-144)

Now, neither Stefan nor I are specialists in superconductivity, but this relation is quite interesting, for it may harbor the possibility to better direct the search for high-temperature superconductors. Though one is left to wonder whether there is a way to know if a material will be superconducting at all, it is intriguing how well the data points fit the curve. It is however not clear to us whether the curve shown above depicts all known data points or only those that fit nicely, and how big the uncertainty of the quantity x is. In any case, the proposed relation is purely heuristic: it was obtained from accumulated data rather than derived from a theoretical model.
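For concreteness, here is the relation in a few lines of code. The formula is the one quoted above; the sample value for x is a placeholder I made up to show the order of magnitude, not a measured doping distance of any particular crystal:

```python
from scipy.constants import h, k, m_e, pi

def roeser_tc(x, n):
    """Jump temperature from Roeser's heuristic relation
    4 pi k m_e (2x)^2 n^(-2/3) = h^2 / Tc, with x the doping
    distance in meters and n the number of superconducting
    layers (usually 1, 2, or 3)."""
    return h**2 / (4 * pi * k * m_e * (2 * x)**2 * n**(-2 / 3))

# Placeholder doping distance of a few nanometers:
print(f"Tc = {roeser_tc(x=4e-9, n=3):.0f} K")  # ~90 K, cuprate territory
```

This also makes the dimensional content transparent: h²/(k m_e) divided by an area gives a temperature, so the relation essentially says the jump temperature is set by a single length scale of the crystal.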

The relation was published in Acta Astronautica 62, 733-736 (2008) and J. Phys. Soc. Japan Suppl. C 77, 142-144 (2008).




Update: Also check out the continuation of this post, Röser's equation, again.

Friday, July 24, 2009

Copernicium and the Island of Heavy Nuclei

The Society for Heavy Ion Research (Gesellschaft für Schwerionenforschung, GSI) in Darmstadt, Germany, has dedicated its mission to slamming together heavy nuclei to produce even heavier ones.

For that purpose they use UNILAC, the 120-meter long Universal Linear Accelerator which accelerates ions to 20 percent of the speed of light to smash them on lead targets, and SHIP, the Separator for Heavy Ion reaction Products, an electromagnetic separator and detector assembly which is used to analyse the reaction products.

During the last decades, the GSI has thus become known for its discovery of new chemical elements in the periodic table. The latest one, element 112, has now been named "Copernicium," after the astronomer Nicolaus Copernicus.

Previous ones were named Bohrium (element 107, after Niels Bohr), Hassium (element 108, after the Latin name for the state of Hesse, where GSI is located), Meitnerium (element 109, after Lise Meitner), Darmstadtium (element 110, after the city of Darmstadt), and Roentgenium (element 111, after Wilhelm Röntgen). These elements have half-lives ranging from seconds to minutes.

Element 112 (isotope 112-277) was produced for the first time in fusion reactions of zinc-70 (proton number 30) projectiles with lead-208 (proton number 82) targets: the proton numbers add up to 112, and the fused nucleus emits one neutron, leaving mass number 277 = 70 + 208 - 1. In a three-week experiment running 24 hours per day, one nucleus of element 112 was first observed on February 9, 1996. The nucleus disintegrates in a series of α-decays, which allows its identification:
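The bookkeeping behind such a chain is simple: each α-decay carries away two protons and two neutrons, so the daughter nuclei follow from pure arithmetic. A minimal sketch (my own illustration; the element symbols are the standard ones):

```python
# Walk the alpha-decay chain starting from element 112, mass number 277.
# Each alpha decay: proton number -2, mass number -4.
names = {112: "Cn", 110: "Ds", 108: "Hs", 106: "Sg", 104: "Rf", 102: "No"}

z, a = 112, 277
chain = []
while z >= 102:
    chain.append(f"{a}-{names[z]}")
    z, a = z - 2, a - 4

print(" -> ".join(chain))
# 277-Cn -> 273-Ds -> 269-Hs -> 265-Sg -> 261-Rf -> 257-No
```

The arithmetic only tells you which nuclei to expect; the actual identification rests on measuring the energies and lifetimes of the α-particles at each step.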



The half-life of the new element Copernicium is not yet clear due to lack of statistics. But the discovery has meanwhile been reproduced in other laboratories in Russia and Japan. Here is a photo of the proud discoverers (Credits: A. Zschau, GSI):



Sigurd Hofmann, the leader of the SHIP group at GSI, gave a talk about these discoveries at the American Chemical Society meeting in San Diego in 2001, to which you can listen here.

Besides the fun of slamming together heavy things, these experiments have the scientific purpose of better understanding the structure of elementary matter. Eventually, you know, physicists want to derive all chemistry from QCD, but we're far away from that. The heavy ion beams produced at the GSI have also been used since 1997 for cancer treatment.

One theory that has been around for several decades is that at sufficiently high numbers of protons and neutrons the stability of elements will increase dramatically. In the periodic system, this patch has been dubbed the (conjectured) "Island of Heavy Nuclei." Its position has shifted a bit with new models of nuclear structure, but it's still believed to be there, somewhere above atomic number 120.

I've always found this intriguing. Imagine: once we've crossed the valley of short-lived elements, we could produce elements stable enough to form molecules and create chemical reactions that don't take place by natural processes anywhere in the known universe. Granted, serious nuclear physicists don't believe these elements would be that stable; more likely they would undergo spontaneous fission. But still, it would make for a nice science-fiction scenario, wouldn't it? A completely new kind of chemistry.




S. Hofmann et al., "The New Element 112,"
Zeitschrift für Physik A 354, 229-230 (1996), DOI 10.1007/BF02769517.

Wednesday, July 22, 2009

This and That

  • Next Wednesday, July 29th, the MaRS Centre in Toronto will be hosting an event about Science 2.0, "What Every Scientist Needs to Know About How the Web is Changing the Way They Work." More information here. [Via Jen]

  • The Anacapa Society, dedicated to providing networking opportunities for theoretical physicists based at primarily undergraduate institutions, has found a permanent residence at Amherst College, Massachusetts, USA. They will also be holding a first workshop for theoretical and computational physicists this summer, August 17-20. [via Arjendu]

  • This weekend, scientists from the Helmholtz research centre DESY, in Hamburg, Germany, generated the first X-ray light for research at the new synchrotron radiation source PETRA III. This means that the most brilliant storage ring X-ray source in the world is now available for experiment operation. Full press release here. [Thanks to Stefan]

  • Elsevier is redefining the scientific article. [via James Dacey]

Monday, July 20, 2009

Hello from Stockholm

Loyal readers of this blog know that I'll be moving to Stockholm in September. I thus spent the previous days looking for an apartment in the vicinity of the Swedish capital. If you haven't been to Stockholm before, it is definitely worth a visit. It is a culturally very interesting city, and charming in addition. The photo below was taken south-east of the City Hall (exactly here).



And this is the view from inside the City Hall



This is one of the narrow streets in Gamla Stan



And here is a random green near the place where I'll probably be moving, just because the photo turned out to be really nice

Saturday, July 18, 2009

Hubble 3D

The IMAX Corporation, NASA and Warner Bros. Pictures are working on a 3D movie about the Hubble telescope. According to the blurb, "it will chronicle the amazing saga of the greatest success in space since the Moon Landing. Featuring stunning on-orbit coverage of the telescope's final repair and jaw-dropping IMAX 3D flights through distant galaxies, Hubble's astonishing legacy will be captured for generations to come." It is scheduled to be released in Spring 2010. More information on this website. Here are some amazing video clips from YouTube that might or might not be related to the movie:






Thursday, July 16, 2009

What is Fundamental?

As previously mentioned, I was recently at the FQXi conference on the Azores. FQXi, the "Foundational Questions Institute," has the mission "to catalyze, support, and disseminate research on questions at the foundations of physics and cosmology." I work at an institute whose research is "devoted to foundational issues in theoretical physics." Fundamental, foundational, basic – what do we mean by that? What should we expect from a fundamental theory? What are the foundational questions? This was one of the questions we discussed at the FQXi conference, and while several of the participants contributed, I don’t want to blame any of them for the following summary.

So what is Fundamental?

A theory is fundamental if it cannot be derived from another, more complete, theory. More complete means the theory is applicable to a larger range of phenomena. Note that a fundamental theory can be derivable from another theory if both are equivalent to each other (though one could plausibly argue that in this case both should be considered the same theory).

Throughout history, the search for and discovery of more fundamental theories in the natural sciences has led to a tremendous amount of progress. That however is no guarantee it will continue to be the path to progress. The issue is in the expression “cannot be derived,” which could mean three different things:

Cannot be derived, version I: not possible in principle.

It might not be possible because it is not possible. Believers in reductionism think this is not the case for the laws of Nature we presently know: they should all follow from one most fundamental "Theory of Everything." While it is true that reductionism has proved very useful, and we thus have good reasons to continue trying it, there is no knowing whether the laws of Nature always allow a reduction. We would then be left with layers of theories that describe Nature on various scales and that cannot ever be derived from each other, and thus have to be considered equally fundamental. While we presently don’t have evidence for this, it is a self-consistent point of view.

In the previous post on Emergence and Reductionism, I explained this is known as “strong emergence”: emergent features on a higher level require a theory that cannot be derived from the underlying one. We previously discussed the paper “More really is different,” in which Gu et al offer an example of a system that does have emergent features, yet it can be proved these are not derivable from the underlying theory. Granted, the system they consider isn’t particularly natural (see discussion on the earlier post), but it gives you an impression of what this case means.

Cannot be derived, version II: not possible in practice

It might not be possible to derive emergent features from a more fundamental theory because of practical constraints. For example, it might take more computing power than we will ever have available, or more time than the lifetime of the universe. It might take infinitely precise knowledge of initial conditions; it might require measuring parameters more precisely than we can ever plausibly expect to; it might take a detector the size of the galaxy; and so on.

Cannot be derived, version III: not yet possible

We might simply not have a derivation because we are too dumb, or because the current knowledge isn’t sufficient, but we might find a derivation with more research.

Okay, now what is fundamental?

The problem is that at any one time we might not know which of these three cases we are dealing with. The exception would be an actual proof of the impossibility of a derivation. (But then, a proof is only as good as its assumptions.) We are thus left with our assessment of the situation, which might change with better understanding of the theories we have. In some cases there is a pretty clear consensus on whether a law is fundamental; in other cases it might not be so clear.

Examples

  • Take for example the Tully-Fisher relation. It relates the luminosity of a spiral galaxy to the 4th power of its rotational velocity. It is a useful heuristic relation, extracted from data, and has predictive power. There is no derivation of that relation; yet I doubt any physicist would argue it is a fundamental law. Instead, with increasing understanding of astrophysical processes, we will eventually be able to derive it.
  • Stefan came up with an interesting historical example, the Titius-Bode law, according to which the distance of planets to the sun grows exponentially with their order in the sequence. The law works pretty well up to Uranus and fails with Neptune, but the outer planets were not known when the law was suggested. People once thought the planets' orbits were fixed by fundamental principles, but with better understanding of the gravitational interaction, the "law" was downgraded to a "rule," or possibly just a coincidence. Though with further knowledge about the dynamics relevant for the formation of solar systems, the relation might turn out to be an "emergent" feature one can expect to be approximately valid.
  • Then there is of course the often discussed question of whether it is in principle possible to derive all of biology, psychology, sociology and economics from physics, making physics the most fundamental of all sciences. Many physicists believe this to be the case. For that reason, one of my profs used to refer to physics as “the queen of sciences” (Physik is a feminine noun in German). But we are far away from practically achieving such a derivation, and we thus do not actually know which of the three cases of “cannot be derived” we are dealing with. Already at the level of proteins things get murky, and we should be considering the option that biology might indeed be as fundamental as physics, in the sense that it cannot be derived - cannot be derived in principle, not ever.

One of the reasons why the first case might apply, even though reductionism has worked so well over a large range of scales, is that in some areas of science the separation of scales might no longer work, and/or there might be no scale that can be used for separation. In physics the scale is typically energy, and we are used to neglecting things that happen at energies much higher (wavelengths much smaller) than what we are probing. We know this is a safe procedure, backed up by the framework of effective field theories. In contrast, a system like our societies does not simply have higher-level organizations constituted out of smaller elements, such that these smaller elements define the "emergent" properties. Instead, these organizations also act back on the elements they are built of and change their behaviour.
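Schematically, the effective field theory statement looks as follows (a textbook-style sketch, not tied to any particular theory): physics at energies E far below some scale Λ is captured by the light degrees of freedom, with everything at higher energies entering only through suppressed correction terms,

```latex
\mathcal{L}_{\mathrm{eff}} \;=\; \mathcal{L}_{\mathrm{light}}
  \;+\; \sum_i \frac{c_i}{\Lambda^{\,d_i-4}}\,\mathcal{O}_i\,,
\qquad d_i > 4,
```

so the contribution of each operator O_i scales like (E/Λ)^(d_i - 4) and becomes negligible for E much smaller than Λ. It is exactly this suppression that has no analogue when, as in the social systems just mentioned, there is no clean hierarchy of scales.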

Coming back to physics, there are of course the questions that are hotly discussed at the research front today, those asking what is fundamental in our present theories. Can the masses of particles in the Standard Model be derived from a more fundamental theory? Are space and time themselves emergent from an underlying theory (generally expected to marry quantum mechanics with general relativity)? Is quantum mechanics fundamental, or can the quantization procedure and the measurement prescription be derived from a more complete theory?

I don’t know. But I really, really want to know.

Aside: Some weeks ago Clifford also wrote about the question what is fundamental, anyway? Since he sent me the link to make sure I don’t miss it, I can’t get away without mentioning it. Clifford is mostly concerned with people who use the label “more fundamental” to mean their work is more relevant. While that might happen, people using superlatives to claim their own work (life, opinion) is “more this” or “more that” than others’ is hardly remarkable, and certainly not specific to theoretical physics. The other point Clifford makes is that “Nature recycles good ideas,” meaning that the frameworks of fundamental theories can often be found useful also in non-fundamental areas - and the other way 'round. It is an interesting point, but it addresses more the question of where one can find inspiration than what is actually fundamental.

Bottomline

A theory is fundamental if it cannot be derived from a more complete theory, yet there are different reasons why we may not be able to derive it: it might not be possible in principle, it might not be possible in practice, or we might not yet have sufficient knowledge to do it. In general, we do not know which case we are dealing with. Misjudging the situation can waste a lot of time and hinder progress. If we wrongly believe a property is not fundamental, we risk searching forever for a more fundamental explanation that doesn't exist. On the other hand, if we believe something is fundamental even though it isn't, our understanding of Nature will remain limited. What is sure though is that understanding always starts with a question.

Saturday, July 11, 2009

FQXi on the Azores

As previously mentioned, I am currently at the FQXi conference in Ponta Delgada on São Miguel, Azores. Some of you correctly noticed the only purpose of this meeting is to annoy everybody who isn't here. São Miguel is stunningly beautiful, though the weather has been mostly cloudy and rainy. On the other hand, it then doesn't hurt so much to sit in a deep-frozen seminar room. The Azoreans seem to live mostly from agriculture and tourism. They have a university, but it's distributed over three islands and focuses on the hands-on fields: engineering, economics, medicine, etc.



It has taken me a while to pin down why I find the meeting considerably more interesting than conferences I usually go to. One reason is certainly the variety of topics. Most conferences these days feature a monoculture of specialists in one particular area. And while that is very efficient for exchanging recent results, it also becomes repetitive rather easily. Here, we have a mix of "fundamental" fields, covering cosmology, quantum mechanics, quantum information and quantum gravity. There are also some philosophers here, and one experimentalist. (I think it's by accident rather than design he's the only one.)

But what really stands out is that the topics discussed are bold. There's Olaf Dreyer speculating on the origin of space and time from matter, Paul Davies on the relevance of increasing complexity in the evolution of the universe and the role of the observer, Anthony Aguirre on false vacuum bubbles, Eduardo Guendelman on the possibility of creating universes, Fotini Markopoulou on the emergence of space from a non-geometric phase in the early universe, and Louis Crane on black holes as power sources. Depending on your taste, you might call them courageous or nuts. Laura Mersini-Houghton, who talked about the possibility of finding evidence for the multiverse through entanglement on super-horizon scales, is one of the more conservative here. There is also a lot of talk about the arrow of time, alien civilisations, and the Future of Sex.

I also met another blogger many of you will know, Scott Aaronson, who turned out to be younger and more, oohm, entertaining than I thought.

[Scott Aaronson]

Then there are the usual suspects, Garrett Lisi, Julian Barbour, Max Tegmark etc. You find the full list of participants here.


[Julian Barbour and Olaf Dreyer]


I further met Zeeya Merali, who wrote the article for New Scientist on Garrett's "Exceptionally Simple Theory of Everything." Zeeya turned out to be a very charming young woman with a PhD in physics who finds writing about science a great way to follow her fascination with physics. Over dinner, we had an interesting discussion about science journalism and science blogging.

The multiverse is a recurring issue that people seem to be very divided on, both with respect to its existence and its interpretation. Interestingly, on the list of questions that were thought to become/remain relevant within the next ten years, "Understanding String Theory" ended up with one of the lowest scores. I'm not sure though this reflects more than the interests of the participants.

Yesterday we had a group discussion on what is "fundamental"; I will tell you about that some other time.

You find a lot more details on the conference and the talks on the FQXi blog.

Friday, July 10, 2009

The Future of Sex

I'm here on the Azores at the FQXi conference. It is a tremendously interesting meeting. For once, I don't get weird looks when I express my belief that Quantum Mechanics will turn out to be not fundamental, or that our understanding of human consciousness will be relevant to avoid a stagnation of progress in science.

Yesterday we had an excursion and got to see some of the beautiful scenery of São Miguel. Our organizers, Max Tegmark and Anthony Aguirre, clustered us in groups and assigned us the task of coming up with future scenarios: what is likely or unlikely to happen within 10 and 1000 years. They are still working on bringing the results into a useful format; I guess they will appear on the FQXi website sooner or later. I just want to pick out one of the more amusing points, brought up by Paul Davies' wife Pauline (if I recall correctly): Will we get bored of sex? (Audience question: "Do you have any evidence for that?")

It doesn't quite fit into the category of "fundamental" questions one expects at a theoretical physics conference, but I think it's an interesting point. In times when more and more women make use of in vitro fertilization and artificial insemination, and when we have good chances of producing artificial sperm, rendering men obsolete altogether, will we continue to have sex, or will it become evolutionarily redundant and, in the long run, uninteresting? Will this happen within the next 1000 years?

I think it is unlikely that such a dramatic change of evolution and natural selection would be completed within 1000 years. I find it possible however that mankind will split into two branches, one making use of the biological and technological enhancements modern science is offering, the other rejecting these changes to human nature. In the long run, one of these will turn out to be more successful, but 1000 years are not sufficient to settle that. I generally think many people underestimate the wisdom of Nature and overestimate human ingenuity; thus the probability that something will go dramatically wrong when we start designing humans is pretty high.

See also: The Future of Rationality

Tuesday, July 07, 2009

Drained Brains

Economist Ahmed Tritah published the results of an interesting study on the brain drain phenomenon, the migration of European scientists to the USA:


The study is based on American census statistics from 1980 to 2006 and examines the education, experience and labor quality of the migration flow across the ocean. In brief, he finds that the people who leave are better educated and skilled than the average of the source country's population, and that the loss hurts. Not so surprisingly, he also found that countries which increased their spending on Research and Development experienced lower rates of expatriation to the United States.

It is unfortunate the available data ends in 2006, since a lot has changed since then, and I would be interested to see newer statistics. The European Research Council, and Germany in particular, started many initiatives to counter the brain drain, and enough time has passed that one could see first effects.

Saturday, July 04, 2009

How about a Science Fiction Journal?

I think we should have a journal dedicated to scientific fiction. Occasionally, I stumble across a paper on the arXiv that actually wanted to be a science fiction story. It is scientifically accurate but somewhat far-fetched - and begging for a narrative.

A good example is Learned et al's recent paper on The Cepheid Galactic Internet, arguing that aliens could use modulation of Cepheid variable stars to encode messages to other civilizations. The Dyson Sphere is an older example. A more recent one is Hsu and Zee's suggestion that the CMB contains a message. Various papers on warp drives, wormholes and time travel would also qualify. What do you think?

Friday, July 03, 2009

This and That

  • The Economist has a nice article on The Underworked American:
    "Americans like to think of themselves as martyrs to work. They delight in telling stories about their punishing hours, snatched holidays and ever-intrusive BlackBerrys. At this time of the year they marvel at the laziness of their European cousins, particularly the French. Did you know that the French take the whole of August off to recover from their 35-hour work weeks? [...]

    But when it comes to the young the situation is reversed. American children have it easier than most other children in the world, including the supposedly lazy Europeans [...]

  • Did you notice we're in the middle of a global pandemic? Here's California's reaction: Drive Through Doctors, see "The Doctor Will See You At The Next Window."

  • Nature has a special on Science Journalism, accompanying the 6th World Conference of Science Journalists from 30 June-2 July 2009 in London, meant to "shine a spotlight on the profession in changing times." It contains several interesting pieces, for example Boyce Rensberger's essay "Science journalism: Too close for comfort." (Thanks to George for sending the link!) Rensberger's essay is a brief historical account "to reflect on how far the profession has come since its beginning." (Occasionally a bit too far?) He closes by saying
    "We are obviously now in the 'Digital Age', and the very definition of journalism is changing in uncertain directions. Science journalism has moved from working for the glory of the scientific establishment to taking back its independence and exercising a new responsibility to the public. Now, traditional news outlets are withering, leaving many journalists to self-publish online with total independence and a direct connection to the public. But scientists too can use the web, bypassing journalists altogether and taking their science — and their agendas — directly to the public. It is becoming increasingly difficult for readers to tell which sources are disinterested and which have an axe to grind.

    If science journalists are to regain relevance to society, not only must they master the new media, they must learn enough science to analyse and interpret the findings — including the motives of the funders. And, as if that were not enough, they must also anticipate the social impacts of potential new technologies while there is still time to make a difference."

    Also recommendable is the editorial "Filling the void" (comments in square brackets added):
    "[S]cientists are blogging in ever increasing numbers [are they?], and the most popular blogs draw hundreds of thousands of readers each month [#visitors not equal #readers]. These blogging scientists not only offer expertise for free, but have emerged as an important resource for reporters. A Nature survey of nearly 500 science journalists shows that most have used a scientist's blog in developing story ideas [sure, it's all our own fault] ...

    Sadly, these activities live on the fringe of the scientific enterprise. Blogging will not help, and could even hurt, a young researcher's chances of tenure [Who wants tenure anyway?]. Many of their elders still look down on colleagues who blog, believing that research should be communicated only through conventional channels such as peer-review and publication [petroglyphs!]. Indeed, many researchers are hesitant even to speak to the popular press, for fear of having their carefully chosen words twisted beyond recognition [once bitten, twice shy].

    But in today's overstressed media market, scientists must change these attitudes if they want to stay in the public eye. They must recognize the contributions of bloggers [YES!] and others [others??], and they should encourage any and all experiments that could help science better penetrate the news cycle. Even if they are reluctant to talk to the press themselves, they should encourage colleagues who do so responsibly [pass the buck]. Scientists are poised to reach more people than ever, but only if they can embrace the very technology that they have developed [the spirits that we called...]"

    See also our earlier post Do we need Science Journalists?

  • Paul Fendley was offended by a "the" in Lee Smolin's book "The Trouble With Physics" and thus offers us Five Problems in Physics without the Definite Article. It must totally suck to be a writer. Thanks to Matt for the link.
    You find my top 10 unsolved problems in physics here. Note absence of definite article. Now do I qualify to write a book or what?

Thursday, July 02, 2009

Giant Thistle

Do you recall the day you sat on the thistle? Would you have thought these things can outgrow you?


That plant is about 2m high. Would make for a nice Christmas tree.

Wednesday, July 01, 2009

Why are modern scientists so dull? And why that question is nonsense.

Bruce Charlton, Professor of Theoretical Medicine at the University of Buckingham, wrote an essay
    Why are modern scientists so dull? How science selects for perseverance
    and sociability at the expense of intelligence and creativity

I stumbled across this on Information Processing; you can download the PDF here.

After reading the paper, I felt the need to check that the Elsevier logo on the PDF is not a fake. It isn't. The thing got published in the journal Medical Hypotheses, Volume 72, Issue 3, Pages 237-243. Prof. Charlton, btw, is Editor in Chief of this journal.

Summary

The argument the author puts forward can be roughly summarized as follows.

Modern scientists are "intellectually dull" and "lack scientific ambition." The reason for this, according to Charlton, is a failure of the selection process in the academic system. He argues that the education of scientists takes increasingly long. As a result, being a scientist nowadays requires "an almost superhuman level of [...] perseverance - the ability to doggedly continue a course of action in pursuit of a goal, over a long period and despite difficulties, setbacks and the lack of immediate rewards (and indeed the lack of any guaranteed ultimate rewards)."

A near-synonym for this perseverance that he uses throughout the paper is the Big Five personality trait called "Conscientiousness". (The "Big Five" is a fairly common personality test that you can do yourself eg here). Besides Conscientiousness, Charlton writes, scientists today need to score high also on a second Big Five personality trait called "Agreeableness."

In the rest of the paper he argues that what is actually relevant for scientific success is a combination of three different factors: most importantly, the IQ; and besides the IQ, creativity and "transcendental truth-seeking." If scientists are selected for other qualities than these, then the average IQ of scientists isn't as high as it could be, and their research not as revolutionary as it should be. And that is then the reason why scientists are so agreeable, so conscientious, so uncreative. Or in one word: dull. That bothers Charlton because science has a need for "revolutionary scientists" (greetings from Kuhn), and we thus have a lack of these people. Instead we have an overdose of normal scientists. Revolutionary science is what the NSF calls "transformative research".

After elaborating on the importance of a high IQ, Charlton claims we should be looking for people with a low score of conscientiousness because "working on your own problem requires much less perseverance than working hard for many years at non-scientific problems, or working hard for many years at other people's scientific problems."

In one section Charlton writes that creativity has been shown to be positively correlated with psychoticism, and even though "high levels of psychoticism are maladaptive," "low psychoticism would therefore be a desirable trait for normal scientists, but undesirable for revolutionary scientists." In the following section, he further makes a case for "asocial and awkward individuals," which he means to be the opposite of "agreeable." (This was the part of the paper that put off Chad; see the discussion at Uncertain Principles.)

In the light of his elaboration, educational achievement is then no longer a reliable factor for determining a student's promise. Charlton thus talks into existence the following relation:

Educational attainment ≈ IQ x Conscientiousness

Since he claims that low conscientiousness is what distinguishes the "revolutionary" scientist, one then wants to measure this factor. A tedious calculation yields

Conscientiousness ≈ Educational attainment/IQ

Thus, what one should measure is simply a student's IQ, and then look at their grades. If their grades aren't as good as their IQ suggests, then they are "under-achievers" and thus promising revolutionary scientists. It is noteworthy that it does occur to the author that such a procedure for selecting scientists has the slight problem that it's not so hard to fake bad grades. His comment is: "[A] person could make themselves look like an 'underachiever' by deliberately messing up their exams [...] - however this would only be achievable at the cost of lowering their exam results, which is not often going to be a helpful thing to do so."

Leaving aside that the content of this sentence is close to nil, it neglects the fact that if people listened to Bruce Charlton, it would become a helpful thing to cheat on your exams.
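To make the mechanics of the proposed filter explicit, here is a deliberately literal sketch of the selection rule as summarized above. The threshold and the sample numbers are mine and purely illustrative (Charlton gives no operational recipe), and attainment is naively put on the same scale as IQ:

```python
def charlton_flag(iq, attainment, threshold=0.9):
    """Charlton's proxy: Conscientiousness ~ attainment / IQ.
    A low ratio flags an 'underachiever', i.e. his putative
    revolutionary scientist. Threshold is illustrative."""
    return attainment / iq < threshold

# Two hypothetical candidates with equal IQ but different grades:
candidates = {"A": (130, 128), "B": (130, 100)}
for name, (iq, attainment) in candidates.items():
    print(name, "flagged as revolutionary:", charlton_flag(iq, attainment))
```

Candidate B gets flagged - and so would anyone who deliberately tanks their exams, which is exactly the failure mode just noted.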

Comments

Since, as you know, the failure of the academic system to select the most promising scientists is a pet topic of mine, this can't go uncommented.

First, the starting point of the whole article is unwarranted. Where is the evidence that something is wrong with modern science? How do you know that we have too few "revolutionary" scientists and too many "normal" scientists? This lack of a basis, incidentally, is the same problem I have with Lee Smolin's call for more "risky" research. While I am sympathetic to the argument and personally tend to agree, it's not a scientific statement, and anecdotes can't replace data. How do we know it's worse today than yesterday? Who determines whether we need more "revolutionary scientists?" Will somebody calculate a percentage? Who? Based on what? And wouldn't one expect that to depend on the field of research? And on the status of that field?

Second, it is highly doubtful that low conscientiousness is beneficial for "revolutionary" science. Charlton's argument is based on his belief that "self-chosen problems provide much more immediate reward," thus requiring a lower level of conscientiousness. Unfortunately, this claim is just plain wrong. If you choose a problem yourself, if you are "non-agreeable" and left to your own devices, you had better score high on perseverance and conscientiousness, and have a high capability to cope with frustration. I have no clue how Charlton came up with this assertion. In contrast to most of the other claims that he makes, this one is not backed up by any reference.

Third, note Charlton's claim is not merely that revolutionary scientists do not necessarily need a high level of conscientiousness, but that they need a low one, meaning conscientiousness must be understood as actually being harmful to their research.

Fourth, any claim that the most promising scientists can be identified by measuring some numbers assigned to their name by itself limits the possibility for revolutions. You may be oh-so-sure that measuring three relevant factors will reliably select the best scientists, but I might disagree. Who are you to decide what's good for science?

Fifth, and what about that thing called "transcendental truth-seeking?" Let us see what Charlton has to say about it: "A further vital ingredient is necessary: that elite scientists must have a vocational devotion to transcendental values of truth," and "Great revolutionary science is therefore a product of transcendental truth-seeking individuals working in a truth-seeking milieu," and "detecting truth-seeking, requires a scientific system that explicitly and in practice values transcendental truth-seeking." That sounds all well and fine, except that, lacking any explanation of what "transcendental truth-seeking" is supposed to mean, you could replace "truth" with "banana" and not change the scientific content of these statements. Charlton further claims that "science nowadays [...] lacks the living presence of such transcendental values." I occasionally feel like some of my colleagues' values are a little too transcendental. I guess that means I'm a very dull and normal scientist. Dooh.

Bottomline

The problem Charlton runs into is the same problem all other such attempts to fix the academic system run into. They attempt to define absolute criteria for "success" or "good research," and fail to see that the definition of such criteria itself will work against their goal. Whenever you define a criterion, whenever you fix a percentage, whenever you claim we need more of that and less of that, you are twisting knobs on a system that works best without any twisting. It works without method, and it works without measure.

I argued previously that there is no better way to do science than to let scientists do it themselves and just make sure the research process isn't affected by external pressures. Scientists themselves are well aware of the need for revolutionary science/risky projects/transformative research. They also know brilliant people can be complicated. They know the value of disagreement. They are smart people, and most of them know who Kuhn, Feyerabend and Popper are. They are in academia because they are dedicated to science and truth-seeking. The problem is not that they don't know what to do. The problem is that "the system" does not allow them to follow their instincts, and various sorts of pressure (most notably financial and time pressure) divert their interests. This in turn has consequences for the selection process. In the long run this can lead to a detrimental population of the academic research environment.

More details in my earlier post We have only ourselves to judge each other.

For completeness, here are my Big 5 results, and I'm INTJ.