Tuesday, March 25, 2014

Does nature hide strong curvature regions?

Quantum gravitational effects are strong when space-time curvature becomes large, so large that it reaches the Planckian regime. Unfortunately, space-time around us is barely curved. For all practical purposes, you sit in a flat space-time. This is why you don’t have to worry about post-post Newtonian corrections if you ask Siri for directions, but also why it takes some experimental effort to detect the subtle consequences of Einstein’s theory of General Relativity – and that’s the classical case. In the almost flat background around us, quantum effects of gravity are hopelessly small.

But space-time curvature isn’t small everywhere. When matter collapses to a black hole, the matter density and also the curvature become very large and eventually, long after you’d have been spaghettified by tidal forces, reach the regime where quantum gravitational effects are sizeable. The problem is that this region, even though it almost certainly exists inside the black holes that astronomers watch, is hidden below the black hole’s horizon and not accessible to observation.

Or is it? Could there be strong curvature regions in our universe that are not hidden behind event horizons and allow us to look straight onto large quantum gravitational effects?

The “Cosmic Censorship” conjecture states that singularities which form when matter density becomes infinitely large are always hidden behind horizons. But more than 40 years after this conjecture was put forward by Roger Penrose, there is still no proof that it is correct. On the contrary, recent developments, supported by numerical calculations which were impossible in the 1970s, indicate that singularities might form without being censored. These singularities might be “naked”, and yes that is the technical expression.

It has been known for a long time that General Relativity admits solutions with naked singularities, but it was believed that these do not form in realistic systems because they require special initial conditions that are never found in nature. However, today several physically realistic situations are known to result in naked singularities. Now that we cannot rule out naked singularities on theoretical grounds, we are left to wonder how we could detect them if they exist for real. And if this means strong curvature regions are within sight, what is the potential for observational evidence of quantum gravity?

It turns out these questions are more difficult to answer than you’d expect. Evidence for black hole horizons comes primarily from not seeing evidence of the surface of a compact object. A naked singularity, however, also doesn’t have a hard surface, so these observations are not of much use. If matter collapses and heats up, it makes a difference for the emitted radiation whether a horizon forms or not. This difference, however, is so small that it cannot be detected.

This has led researchers to look for other ways to distinguish between a black hole and a naked singularity, for example by asking how a naked singularity would act as a gravitational lens in comparison to a black hole. However, the timelike naked singularities considered in this work are not of the type that has been shown to form in physically realistic collapse.

The most promising study so far is a recent paper by a group of physicists located in Morelia, Mexico:
    Observational distinction between black holes and naked singularities: the role of the redshift function
    Néstor Ortiz, Olivier Sarbach, Thomas Zannias
    arXiv:1401.4227 [gr-qc]
In this paper, the authors have studied whether one can distinguish between black holes and naked singularities not by the light that is emitted from the object itself during collapse, but by light from a different source that travels through the collapse region. They find that the luminosity curves of the two cases differ on a timescale that, for a stellar black hole, is about 10⁻⁵ s. The authors do not evaluate whether it is feasible to detect the difference with presently existing technology, but the signal does not seem hopelessly small.
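As a rough plausibility check (my own back-of-the-envelope estimate, not a calculation from the paper): 10⁻⁵ s is just the light-crossing time of the Schwarzschild radius of a solar-mass object, which is the natural timescale for features of a stellar collapse region.

```python
# Back-of-the-envelope check: light-crossing time of the Schwarzschild
# radius of a solar-mass object.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2   # Schwarzschild radius, about 3 km
t_cross = r_s / c            # light-crossing time, about 1e-5 s
print(f"r_s = {r_s:.2e} m, t_cross = {t_cross:.1e} s")
```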

The space-times that are considered in the above have a Cauchy horizon, which is an interesting but also somewhat troubling concept that the cosmic censorship conjecture is supposed to avoid. The presence of the Cauchy horizon basically means that after a certain moment in time you need additional initial data. You could interpret this as a classical instance of indeterminism. However, quantum gravity is generally expected to remove the singularity anyway, so don’t get too much of a headache over this. More interesting is the question whether the difference between the presence and absence of the horizon would not be easier to detect if quantum gravitational effects were taken into account.

I am sure we will hear more about this in the near future. Maybe we’ll even see it.

Monday, March 17, 2014

Do scientists deliberately use technical expressions so they cannot be understood?

Secret handshake?
Science or gibberish?
“[E]xisting pseudorandom and introspective approaches use pervasive algorithms to create compact symmetries. The development of interrupts would greatly amplify Byzantine fault tolerance. We construct a novel method for the investigation of online algorithms.”

“[T]he effective diminution of the relevant degrees of freedom in the ultraviolet (on which morally speaking all approaches agree) is interpreted as universality in the statistical physics sense in the vicinity of an ultraviolet renormalization group fixed point. The resulting picture of microscopic geometry is fractal-like with a local dimensionality of two.”
IEEE and Springer recently withdrew 120 papers that turned out to be randomly generated nonsense, and Schadenfreude spread among the critics of commercial academic publishing. The internet offers a wide variety of random text generators, including the one used to create the now withdrawn Springer papers, called SciGen. The difficult part of creating random academic text is the grammar, not the vocabulary. If you start with a grammatically correct sentence it is easy enough to fill in technical language.

Take as example the above sentence
“The difficult part of creating random text is the grammar, not the vocabulary.”
And just replace some nouns and adverbs:
“The difficult part of creating completely antisymmetric turbulence is the higher order correction, not the parametric resonance.”
Or maybe
“The difficult part of creating parametric turbulence is the completely antisymmetric resonance, not the higher order correction.”
Sounds very educated, yes? I have some practice with that ;o) The problem is that if you don’t know the technical terms, you can’t tell whether the relations implied by the grammar make sense. There is thus, not so surprisingly, a long history of cynics mocking this narrow target group of academic writing, and this cynicism spreads rapidly now that academic writing has become more widely available. With the open access movement there swells the background choir chanting that availability isn’t the same as accessibility. Nicholas Kristof recently complained about academic writing in an NYT op-ed:
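The fill-in-the-blanks trick is easy to automate. Here is a toy sketch (the template and the little vocabulary pool are my own invention, far simpler than the context-free grammar SciGen actually uses):

```python
import random

# A fixed, grammatically correct template; only the content words are swapped.
TEMPLATE = "The difficult part of creating {a} is the {b}, not the {c}."

# Any pool of technical noun phrases will do (made up for illustration).
TERMS = [
    "completely antisymmetric turbulence",
    "higher order correction",
    "parametric resonance",
    "perturbative expansion",
    "nonlocal anomaly",
]

def educated_nonsense(rng=random):
    """Fill the template with three distinct random technical terms."""
    a, b, c = rng.sample(TERMS, 3)
    return TEMPLATE.format(a=a, b=b, c=c)

print(educated_nonsense())
```

The grammar guarantees the sentence parses; only someone who knows the terms can tell that the implied relations are nonsense.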
“[A]cademics seeking tenure must encode their insights into turgid prose. As a double protection against public consumption, this gobbledygook is then sometimes hidden in obscure journals — or published by university presses whose reputations for soporifics keep readers at a distance.”
Kristof calls upon academics to better communicate with the public, which I certainly support. At the same time however he also claims professional language is unnecessary and deliberately exclusive:
“Ph.D. programs have fostered a culture that glorifies arcane unintelligibility while disdaining impact and audience. This culture of exclusivity is then transmitted to the next generation through the publish-or-perish tenure process.”
Let me take these two issues apart. First deliberately exclusive, and second unnecessary.

Steve Fuller, who is a professor of Social Epistemology at the University of Warwick, argues (for example in his book “Knowledge Management Foundations”) that the value of knowledge is related to the scarcity of access to it. For that reason, academics have an incentive to put hurdles in the way of those wanting to get into the ivory tower and to make entry more difficult than it has to be. It is a good argument, though it is hard to tell how much of this exclusivity is deliberate. At least when it comes to my colleagues in math and physics, the exclusivity seems more a matter of neglect than of intent. Inclusivity takes effort, and most academics don’t make this effort.

This brings me to the argument that academic slang is unnecessary. Unfortunately, this is a very common belief. For example, in reaction to my recent post about the tug-of-war between accuracy and popularity in science journalism, several journalists remarked that surely I must have meant precision rather than accuracy, because good journalism can be accurate even though it avoids technical language.

But no, I did in fact mean accuracy. If you don’t use the technical language, you’re not accurate. The whole raison d’être [entirely unnecessary French expression meaning “reason for existence”] of professional terminology is that it is the most accurate description available. And PhD programs don’t “glorify unintelligible gibberish”, they prepare students to communicate accurately and efficiently with their colleagues.

For physicists the technical language is equations, the most important ones carry names. If you want to avoid naming the equation, you inevitably lose accuracy.

The second Friedmann equation, for example, does not just say the universe undergoes accelerated expansion with the present values of dark matter and dark energy, which is a typical “non-technical” description of this relation. The equation also tells you that you’re dealing with a differentiable, metric manifold of dimension 4 and Lorentzian signature and are within Einstein’s theory of general relativity. It tells you that you’ve made an assumption of homogeneity and isotropy. It tells you exactly how the acceleration relates to the matter content. And constraining the coupling constants for certain Lorentz-invariance violating operators of order 5 is not the same as testing “space-time graininess” or testing whether the universe is a computer simulation, to just name some examples.
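For reference, the equation in question, in standard notation with scale factor a, energy density ρ, pressure p, and cosmological constant Λ, reads:

```latex
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}
```

Even the dot encodes assumptions: it is a derivative with respect to cosmic time, which presumes the homogeneous and isotropic FLRW metric, and the relation between the two sides presumes Einstein’s field equations.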

These details are both irrelevant and unintelligible for the average reader of a pop sci article, I agree. But, I insist, without these details the explanation is not accurate, and not useful for the professional.

Technical terminology is an extremely compressed code that carries a large amount of information for those who have learned to decipher it. It is used in academia because without compression nobody could write, let alone read, a paper. You’d have to attach megabytes worth of textbooks, lectures and seminars.

In science, most terms are cleanly defined, others have various definitions and some I admit are just not well-defined. In the soft sciences, the situation is considerably worse. In many cases trying to pin down the exact meaning of an -ism or -ology opens a bottomless pit of various interpretations and who-said-whats that date back thousands of years. This is why my pet peeve is to discard soft science arguments as useless due to undefined terminology. However, one can’t really blame academics in these disciplines – they are doing the best they can building castles on sand. But regardless of whether their terminology is very efficient or not compared to the hard sciences, it too is used for the sake of compression.

So no, academic slang is not unnecessary. But yes, academic language is exclusive as a consequence of this. It is in that not different from other professions. Just listen to your dentist and her assistant discuss their tools and glues, or look at some car-fanatics forum, and you’ll find the same exclusivity there. The difference is gradual and lies in the amount of time you need to invest to be one of them, to learn their language.

Academic language is not purposefully designed to exclude others, but it arguably serves this purpose once in place. Pseudoscientists tend to underestimate just how obvious their lack of knowledge is. It often takes a scientist no more than a sentence to recognize an outsider as such. Are you able to tell the opening sentences of this blogpost from gibberish? Can you tell the snarxiv from the arxiv?

Indeed, it is in reality not the PhD that marks the science-insider from the outsider. The PhD defense is much like losing your virginity, vastly overrated. It looms big in your future, but once in the past you note that nobody gives a shit. You mark your place in academia not by hanging a framed title on your office door, but by using the right words at the right place. Regardless of whether you do have a PhD, you’ll have to demonstrate the knowledge equivalent of a PhD to become an insider. And there’s no shortcuts to this.

For scientists this demarcation is of practical use because it saves them time. On the flipside, there is the occasional scientist who goes off the deep end and who then benefits from having learned the lingo to make nonsense sound sophisticated. However, compared to the prevalence of pseudoscience this is a rare problem.

Thus, while the exclusivity of academic language has beneficial side effects, technical expressions are not deliberately created for the purpose of excluding others. They emerge and get refined in the community as efficient communication channels. And efficient communication inside a discipline is simply not the same as efficient communication with other disciplines or with the public, a point that Kristof in his op-ed is entirely ignoring. Academics are hired and get paid for communicating with their colleagues, not with the public. That is the main reason academic writing is academic. There is probably no easy answer to just why it has come to be that academia doesn’t make much effort communicating with the public. Quite possibly Fuller has a point there in that scarcity of access protects the interests of the communities.

But leaving aside the question of where the problem originates, prima facie [yeah, I don’t only know French, but also Latin] the reason most academics are bad at communicating with the public is simple: They don’t care. Academia presently selects very strongly for single-minded obsession with research. Communicating with the public, whether about one’s own research or to chime in with opinions on science policy, is in the best case useless and in the worst case harmful to the job that pays their rent. Accessibility and popularity do not convert into income for academics, and even an NYT op-ed isn’t going to change anything about this. The academics you find in the public sphere are primarily those who stand to benefit from the limelight: directors and presidents of something spreading word about their institution, authors marketing their books, and a few lucky souls who found a way to make money with their skills and gigs. You do not find the average academic making an effort to avoid academic prose, because they have nothing to gain from that.

I’ve read many flowery words about how helpful science communication – writing for the public, public lectures, outreach events, and so on – can be to make oneself and one’s research known. Yes, can be, and anecdotally this has helped some people find good jobs. But this works out so rarely that on the average it is a bad investment of time. That academics are typically overworked and underpaid anyway doesn’t help. That’s not good, but that’s reality.

I certainly wish more academics would engage with the public and make that effort of converting academic slang to comprehensible English, but knowing how hard my colleagues work already, I can’t blame them for not doing so. So please stop complaining that academics do what they were hired to do and that they don’t work for free on what doesn’t feed their kids. If you want more science communication and less academic slang, put your money where your mouth is and pay those who make that effort.

The first of the examples at the top of this post is random nonsense generated with SciGen. The second example is from the introduction of the Living Review on Asymptotic Safety. Could you tell?

Wednesday, March 12, 2014

What is asymptotically safe gravity and what does it save?

Infinite Grid by Georg Koch.
Einstein’s theory of general relativity, which describes gravity as the curvature of space-time, stands apart from the other interactions by its refusal to be quantized. Or so you have almost certainly read somewhere. But it’s not true.

To begin with, quantizing gravity isn’t all that difficult. Gravity can be and has been quantized much like the other interactions, by promoting gravitational waves to quantum waves. This is called perturbative quantization and is technically somewhat annoying but perfectly doable. The problem starts only after this, because the theory quantized this way no longer makes sense when gravity becomes strong. It delivers infinities as results, no matter what, and that makes it not only useless but also meaningless as a fundamental theory.

You might shrug your shoulders at the infinities because you have probably also heard that quantum field theory has a problem with infinities anyway. But that too is not true...

Yes, the occurrence of infinite results was historically a big issue. But the development of quantum field theory didn’t stop in the 1930s, and today we know very well how to do these calculations. Whenever you get an infinite result, you need to use a measurement to fix a physical parameter. We call it renormalization, and it’s a mathematically clean procedure.

However, if you need an infinite amount of measurements to fix parameters, then you have a real problem because now your theory is no longer predictive: You need infinitely many measurements before you can make a prediction about the next measurement. A theory with that problem is said to be perturbatively non-renormalizable and on that diagnosis most physicists will not resuscitate the patient. Perturbatively quantized gravity has exactly this disease.
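The textbook way to see where this disease comes from is dimensional analysis: Newton’s constant has mass dimension −2, so perturbative amplitudes organize into powers of energy over the Planck mass, and each new order diverges in a new way, requiring its own counterterm and thus its own measurement. Schematically:

```latex
G_N \sim \frac{1}{M_{\rm Pl}^2}\,, \qquad \mathcal{A}(E) \sim \sum_{n} c_n \left(\frac{E^2}{M_{\rm Pl}^2}\right)^n
```

(This expansion suppresses indices and logarithms; the point is only that infinitely many coefficients c_n appear.)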

As long as gravity is weak, for all practical purposes you need only a finite amount of parameters to get to a desired precision. You can use the theory there. But once gravity gets strong, the theory becomes useless. This means it is not a candidate for a fundamental theory.

The use of quantum field theories and their properties thus depend on the energy scale. Interactions can for example become weaker or stronger depending on the energy of the interaction. Quantum Chromodynamics famously becomes weaker at high energies; it is “asymptotically free” and that insight was worth a Nobel Prize in 2004.
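Concretely, the one-loop running of the strong coupling decreases logarithmically with the probed energy Q; for n_f quark flavors and the QCD scale Λ_QCD it reads:

```latex
\alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln\left(Q^2/\Lambda_{\rm QCD}^2\right)}
```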

But how theories in general depend on the energy scale has only really been understood within the last two decades or so. It has been a silent development that almost entirely passed by the popular science press and goes under the name renormalization group flow. The renormalization group flow encodes how a theory depends on the energy scale, and it is at the basis of the idea of effective field theory.

The dependence of a quantum field theory on the energy scale that is used to probe structures is much like Ted Nelson’s idea of the stretch-text, a text in which you can zoom or click into layers of more detail. The closer you look, the more new features, new insights, new information you get to see. It’s the same with quantum field theory. The closer you look, the more new layers you get to see.

Based on this, Weinberg realized in 1976 that perturbative renormalizability is not the only way for a theory to remain meaningful at high energies. It is sufficient if, at high energies (technically: infinitely high), you need to fix only a finite number of parameters, and none of these parameters becomes infinite itself in that limit. These two requirements – a finite number of finite parameters that determine the theory at high energies – are what make a theory asymptotically safe.
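Schematically, with dimensionless couplings g_i running under the renormalization group scale k, the requirement is a fixed point at finite coupling values:

```latex
k\,\frac{\partial g_i}{\partial k} = \beta_i(g_1, g_2, \dots)\,, \qquad \beta_i(g^{*}) = 0 \quad \text{with all } g_i^{*} \text{ finite}
```

Roughly speaking, the finitely many parameters left to fix by measurement correspond to the directions along which the flow can approach this fixed point.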

This then raises the question of whether quantum gravity, though perturbatively nonrenormalizable, might be asymptotically safe and meaningful after all. That this might be so is the idea behind “asymptotically safe gravity”. While the general idea has been around for almost four decades, it was only in the late 1990s, following work by Wetterich and Reuter, that asymptotically safe gravity caught on.

As of today, it has not been proved that gravity is asymptotically safe, though there are several arguments that support this idea. The problem is that doing calculations in an infinite dimensional theory space is not possible, so this space has to be reduced. But then the result can only deliver a limited level of knowledge. The other problem is that even if the theory is asymptotically safe, it might be physically nonsensical at high energies for other reasons.

Another criticism on asymptotically safe gravity has been that it does not seem to take into account that space-time fundamentally might be described by degrees of freedom different from those used in general relativity. While that arguably is so in existing approaches, the idea of renormalization group flow is in principle perfectly compatible with changing to different – more ‘fundamental’ – degrees of freedom at high energies, as Percacci and Vacca have pointed out.

That is to say, this approach towards quantum gravity has its problems, its friends and its foes, as has every other approach towards quantum gravity. But it is a strong competitor. What makes this approach so appealing is its minimalism: Maybe quantum gravity makes sense as a quantum field theory after all! Depending on your attitude though you might find exactly this minimalism unappealing. It’s like at the end of a crime novel the murder victim comes back from vacation and everybody feels stupid for their conspiracy theories.

Whatever your attitude, asymptotically safe gravity has made some contact with phenomenology, mostly in the area of cosmology, though for all I know these studies haven’t yet resulted in a good observable. Most interestingly, asymptotically safe gravity has been shown to also lead to dimensional reduction, and it has recently been argued that it might be related to Causal Dynamical Triangulation. It seems to me that whatever quantum gravity ultimately looks like, asymptotically safe gravity will almost certainly be part of the story.

So the next time somebody tells you that we don’t know how to quantize gravity, keep in mind the many layers of details underneath that statement.

Tuesday, March 04, 2014

10 Misconceptions about Creativity

Lara, painting. She says
it's a snake and a trash can.

The American psyche is deeply traumatized by the finding that creativity scores of children and adults have been constantly declining since 1990. The consequence is a flood of advice on how to be more creative, books and seminars and websites. There’s no escaping the message: Get creative, now!

Science needs a creative element, and so every once in a while I read these pieces that come by my newsfeed. But they’re like one of these mildly pleasant songs that stop making sense when you listen to the lyrics. Clap your hands if you’re feeling like a room without a ceiling.

It’s not like I know a terrible lot about research on creativity. I’m sure there must be some research on it, right? But most of what I read isn’t even logically coherent.
  1. Creativity means solving problems.

    The NYT recently wrote in an article titled “Creativity Becomes an Academic Discipline”:
    “Once considered the product of genius or divine inspiration, creativity — the ability to spot problems and devise smart solutions — is being recast as a prized and teachable skill.”
    Yes, creativity is an essential ingredient to solving problems, but equating creativity with problem solving is like saying curiosity is a device to kill cats. It’s one possible use, but it’s not the only use and there are other ways to kill cats.

    Creativity is in the first place about creation, the creation of something new and interesting. The human brain has two different thought processes to solve problems. One is to make use of learned knowledge and proceed systematically, step by step; this is often referred to as ‘convergent thinking’ and dominantly makes use of the left side of the brain. The other is pattern-finding, free association, often referred to as ‘divergent thinking’, which employs more brain regions on the right side. It normally kicks in only if the straightforward left-brain attempt has failed, because it’s energetically more costly. Exactly what constitutes creative thinking is not well known, but most agree it is a combination of both of these thought processes.

    Creative thinking is a way to arrive at solutions to problems, yes. Or you might create a solution looking for a problem. Creativity is also an essential ingredient to art and knowledge discovery, which might or might not solve any problem.

  2. Creativity means solving problems better.

    It takes my daughter about half an hour to get dressed. First she doesn’t know how to open the buttons, then she doesn’t know how to close them. She’ll try to wear her pants as a cap and pull her socks over the jeans just to then notice the boots won’t fit.

    It takes me 3 minutes to dress her – if she lets me – not because I’m not creative but because it’s not a problem which calls for a creative solution. Problems that can be solved with little effort by a known algorithm are in most cases best solved by convergent thinking.

    Xkcd nails it:

    But Newsweek bemoans:
    “Preschool children, on average, ask their parents about 100 questions a day. Why, why, why—sometimes parents just wish it’d stop. Tragically, it does stop. By middle school they’ve pretty much stopped asking.”
    There’s much to be said about schools not teaching children creative thinking – I agree it’s a real problem. But the main reason children stop asking questions is that they learn. And somewhat down the line they learn how to find answers themselves. The more we learn, the more problems we can address with known procedures.

    There’s a priori nothing wrong with solving problems non-creatively. In most cases creative thinking just wastes time and brain-power. You don’t have to reinvent the wheel every day. It’s only when problems do not give in to standard solutions that a creative approach becomes useful.

  3. Happiness makes you creative.

    For many people the problem with creative thought is the lack of divergent thinking. If you look at the advice you find online, it’s almost all guides to divergent thinking, not to creativity: “Don’t think. Let your thoughts unconsciously bubble away.” “Surround yourself with inspiration.” “Be open and aware. Play and pretend. List unusual uses for common household objects.” And so on. Happiness then plays a role for creativity because there is some evidence that happiness makes divergent thinking easier:
    “Recent studies have shown […] that everyday creativity is more closely linked with happiness than depression. In 2006, researchers at the University of Toronto found that sadness creates a kind of tunnel vision that closes people off from the world, but happiness makes people more open to information of all kinds.”
    Writes Bambi Turner, who has a business degree and writes stuff. Note the vague term “closely linked” and look at the research.

    It is a study showing that people who listened to Bach’s (“happy”) Brandenburg Concerto No. 3 were better at solving a word puzzle that required divergent thinking. In science speak the result reads “positive affect enhanced access to remote associates, suggesting an increase in the scope of semantic access.” Let us not even ask about the statistical significance of a study with 24 students of the University of Toronto in their lunch break, or about its relevance for real life. The happy people participating in this study were basically forced to think divergently. In real life happiness might instead divert you from hacking away at a problem.

    In summary, the alleged “close link” should read: There is tentative evidence that happiness increases your chances of being creative in a laboratory setting, if you are among those who lack divergent thinking and are a student at the University of Toronto.

  4. Creativity makes you happy.

    There’s very little evidence that creativity for the sake of creativity improves happiness. Typically one finds plausibility arguments like the following, saying that solving a problem might improve your life generally:
    “creativity allows [people] to come up with new ways to solve problems or simply achieve their goals.”
    That is plausible indeed, but it doesn’t take into account that being creative has downsides that counteract the benefits.

    This blog is testimony to my divergent thinking. You might find this interesting in your news feed, but ask my husband what fun it is to have a conversation with somebody who changes topic every 30 seconds because it’s all connected! I’m the nightmare of your organizing committee, of your faculty meeting, and of your carefully assembled administration workflow. Because I know just how to do everything better and have ten solutions to every problem, none of which anybody wants to hear. It also has the downside that I can only focus on reading when I’m tired, because otherwise I’ll never get through a page. Good thing all my physics lectures were early in the morning.

    Thus, I am very skeptical of the plausibility argument that creativity makes you happy. If you look at the literature, there is in fact very little that has been shown to lastingly increase people’s happiness at all. Two procedures that have shown some effect in studies are practicing gratitude and getting to know one’s individual strengths.

    For more evidence that speaks against the idea that creativity increases happiness, see 7 and 8. There is some evidence that happiness and creativity are correlated, because both tend to be correlated with other character traits, like openness and cognitive flexibility. However, there is also evidence to the contrary, that creative people have a tendency to depression: “Although little evidence exists to link artistic creativity and happiness, the myth of the depressed artist has some scientific basis.” I’d call this inconclusive. Either way, correlations are only of so much use if you want to actively change something.

  5. Creativity will solve all our problems.

    “All around us are matters of national and international importance that are crying out for creative solutions, from saving the Gulf of Mexico to bringing peace to Afghanistan to delivering health care. Such solutions emerge from a healthy marketplace of ideas, sustained by a populace constantly contributing original ideas and receptive to the ideas of others.”
    [From Newsweek again.] I don’t buy this at all. It’s not that we lack creative solutions, just look around, look at TED if you must. We’re basically drowning in creativity, my inbox certainly is. But they’re solutions to the wrong problems.

    (One of the reasons is that we simply do not know what a “healthy marketplace of ideas” is even supposed to mean, but that’s a different story and shall be told another time.)

  6. You can learn to be creative if you follow these simple rules.

    You don’t have to learn creative thinking, it comes with your brain. You can however train it if you want to improve, and that’s what most of the books and seminars want to sell. It’s much like running. You don’t have to learn to run. Everybody who is reasonably healthy can run. How far and how fast you can run depends on your genes and on your training. There is some evidence that creativity has a genetic component and you can’t do much about this. However, you can work on the non-genetic part of it.

  7. “To live creatively is a choice.”

    This is a quote from the WSJ essay “Think Inside the Box.” I don’t know if anybody ever looked into this in a scientific way, it seems a thorny question. But anecdotally it’s easier to increase creativity than to decrease it and thus it seems highly questionable that this is correct, especially if you take into account the evidence that it’s partially genetic. Many biographies of great writers and artists speak against this, let me just quote one:
    “We do not write because we want to; we write because we have to.”
    W. Somerset Maugham, English dramatist and novelist (1874 - 1965).

  8. Creativity will make you more popular.

    People welcome novelty only in small doses and incremental steps. The wilder your divergent leaps of imagination, the more likely you are to just leave people behind. Creativity might be a potential source of popularity in that at least you have something interesting to offer, but too much of it won’t do any good. You’ll end up being the misunderstood, unappreciated genius whose obituary says “ahead of his time”.

  9. Creativity will make you more successful.

    Last week, the Washington Post published this opinion piece which informs the reader that:
    “Not for centuries has physics been so open to metaphysics, or more amenable to an ancient attitude: a sense of wonder about things above and within.”
    This comes from a person named Michael Gerson who recently opened Max Tegmark’s book and whose occupation seems to be, well, to write opinion pieces. I’ll refrain from commenting on the amenability of professions I know nothing about, so let me just say that he has clearly never written a grant proposal. I warmly recommend you put the word “metaphysics” into your next proposal to see what I mean. I think you should all do that because I clearly won’t, so maybe then I stand a chance in the next round.

    Most funding agencies have used the 2008 financial crisis as an excuse to focus on conservative and applied research to the disadvantage of high-risk and basic research. They really don’t want you to be creative – the “expected impact” is far too remote, the uncertainty too high. They want to hear that you’ll use this hammer on that nail, and when you’ve been hitting at it for 25 months and two weeks, out will pop three papers and two plenary talks. Open to metaphysics? Maybe Gerson should have a chat with Tegmark.

    There is indeed evidence showing that people are biased against creativity in favor of practicality, even if they state they welcome creativity. This study relied on 140 American undergraduate students. (Physics envy, anybody?) The punchline is that creative solutions by their very nature have a higher risk of failure than those relying on known methods, and this uncertainty is unappealing. It is particularly unappealing when you are coming up with solutions to problems that nobody wanted you to solve.

    So maybe being creative will make you successful. Or maybe your ideas will just make everybody roll their eyes.

  10. The internet kills creativity.

    The internet has made life difficult for many artists, writers, and self-employed entrepreneurs, and I see a real risk that this degrades the value of creativity. However, it isn’t true that the mere availability of information kills creativity. It just moves it elsewhere. The internet has turned many tasks that previously required creative approaches into step-by-step procedures. Need an idea for a birthday cake? Don’t know how to fold a fitted sheet? Want to know how to be more creative? Google will tell you. This frees your mind to get creative on tasks that Google will not do for you. In my eyes, that’s a good thing.
So should you be more creative?

My summary of reading all these articles is that if you feel like your life lacks something, you should take stock of your strengths and weaknesses and note what most contributes to your well-being. If you think that you are missing creative outlets, by all means, try some of these advice pages and get going. But do it for yourself and not for others, because creativity is not remotely as welcome as they want you to believe.

On that note, here’s the most recent of my awesomely popular musical experiments: