Sunday, August 31, 2014

The ordinary weirdness of quantum mechanics

Raymond Laflamme's qubit.
Photo: Christina Reed.
I’m just back from our 2014 Workshop for Science Writers, this year on the topic “Quantum Theory”. The meeting was both inspiring and great fun - the lab visit wasn’t as disorganized as last time, the muffins appeared before the breaks and not after, and amazingly enough we had no projector failures. We even managed to find a video camera, so hopefully you’ll be able to watch the lectures once they are uploaded, provided I pushed the right buttons.

Due to popular demand, we included a discussion session this year. You know that I’m not exactly a big fan of discussion sessions, but then I didn’t organize this meeting for myself. Michael Schirber volunteered to moderate the discussion. He started by posing the question of why quantum mechanics is almost always portrayed as spooky, strange, or weird. Why do we continue to do this, and is it beneficial for communicating the science behind the spook?

We could just blame Einstein for this, since he famously complained that quantum mechanics seemed to imply a spooky (“spukhafte”) action at a distance, but that was a century ago and we learned something since. Or some of us anyway.

Stockholm's quantum optics lab.
Photo: Christina Reed.
We could just discard it as headline making, a way to generate interest, but that doesn’t really explain why quantum mechanics is described as weirder or stranger than other new and often surprising effects. How is time dilation in a gravitational field less strange than entanglement? And it’s not that quantum mechanics is particularly difficult either. As Chad pointed out during the discussion, much of quantum mechanics is technically much simpler than general relativity.

We could argue it is due to our daily life being dominated by classical physics, so that quantum effects must appear unintuitive. Intuition however is based on experience and exposure. Spend some time calculating quantum effects, spend some time listening to lectures about quantum mechanics, and you can get that experience. This does not gain you the ability to perceive quantum effects without a suitable measuring device, but that is true for almost everything in science.

The explanation that came up during the discussion that made the most sense to me is that these words are simply stand-ins for technical vocabulary, and the placeholders have become vocabulary in their own right.

The spook and the weirdness, they stand in for non-locality and contextuality, they replace correlations and entanglement, pure and mixed states, non-commutativity, error correction, path integrals or post-selection. Unfortunately, all too often the technical vocabulary is entirely absent rather than briefly introduced. This makes it very difficult for interested readers to dig deeper into the topic. It is basically a guarantee that the unintuitive quantum behavior will remain unintuitive for most people. And for the researchers themselves, the lack of technical terms makes it impossible to figure out what is going on. The most common reaction to supposed “quantum weirdness” that I see among my colleagues is “What’s new about this?”

The NYT had a recent opinion piece titled “Why We Love What We Don’t Understand” in which Anna North argued that we like what isn’t understood because we want to keep the wonder alive:
“Many of us may crave that tug, the thrill of something as-yet-unexplained… We may want to get to the bottom of it, but in another way, we may not — as long as we haven’t quite figured everything out, we can keep the wonder alive.”
This made me think because I recall browsing through my mother’s collection of (the German version of) Scientific American as a teenager, always looking to learn what the scientists, the big brains, did not know. Yeah, it was kinda predictable I would end up in some sort of institution. At least it’s one where I have a key to the doors.

Anyway, I didn’t so much want to keep the mystery alive as I wanted to know where the boundary between knowledge and mystery currently was. Assume for a moment I’m not all that weird but most likely average. Is it surprising then that the headline-grabbing quantum weirdness, instead of helping readers, misleads them about where this boundary between knowledge and mystery lies? Is it surprising then that everybody and their dog has solved some problem with quantum mechanics without knowing what the problem is?

And is it surprising, as I couldn’t help noticing, that the lecturers at this year’s workshop were all well practiced in forward defense, and repeatedly emphasized that most of the theory is extremely well understood? It’s just that the focus on new techniques and recent developments highlights exactly what isn’t (yet) well understood, thereby giving more weight to the still mysterious in the news than it has in practice.

I myself do not mind the attention-grabbing headlines, and that news focuses on what’s new rather than what’s been understood for decades is the nature of the business. As several science writers, at this workshop and also at the previous one, told me, it is often not they who invent the non-technical terms; it is vocabulary that the scientists themselves use to describe their research. I suspect though that the scientists use it trying to adapt their explanations to the technical level they find in the popular science literature. So who is to blame really, and how do we get out of this loop?

A first step might be to stop assuming all other parties are more stupid than one’s own. Most science writers have some degree in science, and they are typically more up to date on what is going on in research than the researchers themselves. The “interested public” is perfectly able to deal with some technical vocabulary as long as it comes with an explanation. And researchers are not generally unwilling or unable to communicate science; they just often have no experience with what the right level of detail is in situations they do not face every day.

When I talk to a journalist, I typically ask them first to tell me roughly what they already know. From their reply I can estimate what background they bring, and then I build on that until I notice I’m losing them. Maybe that’s not a good procedure, but it’s the best I’ve come up with so far.

We all can benefit from better science communication, and a lot has changed within the last decades. Most notably, there are many more voices to hear now, and these voices aim at very different levels of knowledge. What is still not working very well though is the connection between different levels of technical detail. (Which we previously discussed here.)

At the end of the discussion I had the impression opinions were maximally entangled and pure states might turn into mixed ones. Does that sound strange?

Monday, August 25, 2014

Name that Þing

[Image credits Ria Novosti, source]
As a teenager I switched between the fantasy and science fiction aisles of the local library, but in the end it was science fiction that won me over.

The main difference between the genres seemed to be the extent to which authors bothered to come up with explanations. The science fiction authors, they bent and broke the laws of Nature but did so consistently, or at least tried to. Fantasy writers on the other hand were just too lazy to work out the rules to begin with.

You could convert Harry Potter into a science fiction novel easily enough. Leaving aside gimmicks such as moving photos that are really yesterday’s future, call the floo network a transmitter, the truth serum a nanobot liquid, and the invisibility cloak a shield. Add some electric buzz, quantum vocabulary, and alien species to it. Make that wooden wand a light saber and that broom an X-wing starfighter, and the rest is a fairly standard story of the Other World, the Secret Clan, and the Chosen One learning the rules of the game and the laws of the trade, of good and evil, of friendship and love.

The one thing that most of the fantasy literature has which science fiction doesn’t have, and which has always fascinated me, is the idea of an Old Language, the idea that there is a true name for every thing and every place, and if you know the true name you have power over it. Speaking in the Old Language always tells the truth. If you speak the Old Language, you make it real.

This idea of the Old Language almost certainly goes back to our ancestors’ fights with an often hostile and unpredictable nature threatening their survival. The names, the stories, the gods and godzillas, they were their way of understanding and managing the environment. They were also the precursor to what would become science. And don’t we in physics today still try to find the true name of some thing so we have power over it?

Aren’t we still looking for the right words and the right language? Aren’t we still looking for the names to speak truth to power, to command what threatens and frightens us, to understand where we belong, where we came from, and where we are going? We call it dark energy and we call it dark matter, but these are not their true names. We call them waves and we call them particles, but these are not their true names. Some call the thing a string, some call it a graph, some call it a bit, but as Lee Smolin put it so nicely, none of these words quite has a “ring of truth” to it. These are not the real names.

Neil Gaiman’s recent fantasy novel “The Ocean at the End of the Lane” also draws on the idea of an Old Language, of a truth below the surface, a theory of everything which the average human cannot fathom because they do not speak the right words. In Michael Ende’s “Neverending Story”, what does not have a true name dies and decays to nothing. (And of course Ende has a Chosen One saving the world from that no-thing.) It all starts and it all ends with our ability to name what we are part of.

You don’t get a universe from nothing of course. You can get a universe from math, but the mathematical universe doesn’t come from nothing either, it comes from Max Tegmark, that is to say some human (for all I can tell) trying to find the right words to describe, well, everything - no point trying to be modest about it. Tegmark, incidentally, also seems to speak at least ten different languages or so, maybe that’s not a coincidence.

The evolution of language has long fascinated historians and neurologists alike. Language is more than assigning a sound to things and to things you do with things. Language is a way to organize thought patterns and to classify relations, if in a way that is frequently inconsistent and often confusing. But the oldest language of all is neither Sindarin nor Old Norse; it is, for all we can tell, the language of math in which the universe was written. You can call it temperature anisotropy or tropospheric ozone precursors, you can call it neurofibrillary tangle or reverse transcriptase, you can call them Bárðarbunga or Eyjafjallajökull - in the end their true names were written in math.

Friday, August 22, 2014

Hello from Iceland

So here I am on an island in the middle of the Atlantic ocean that's working on its next volcano eruption.


In case you missed yesterday's Google Hangout, FQXi just announced the winners of this year's essay contest and - awesomeness alert! - my essay "How to save the world in five simple steps" won first prize!

I'm happy of course about the money, but what touches me much more is that this is vivid documentation that I'm not the only one who thinks the topics I addressed in my essay are relevant. If you've been following this blog for a while then you know of course that I've been thinking back and forth about the problem of emerging social dynamics, in the scientific communities as well as in society at large, and our inability to foresee and react to the consequences of our actions.

Ten years ago I started out thinking the problem was the modeling of these systems, but over the years, as more and more research and data on these trends became available, I've become convinced the problem isn't understanding the system dynamics to begin with, but that nobody is paying attention to what we've learned.

I see this every time I sit in a committee meeting and try to tell them something about research dedicated to intelligent decision making in groups, cognitive biases, or the sociology of science. They'll not listen. They might be polite and let me finish, but it's not information they will take into account in their decision making. And the reason is basically that it takes them too much time and too much effort. They'll just continue the way it's always been done; they'll continue making the same mistakes over again. There's no feedback in this system, and no learning by trial and error.

The briefest of brief summaries of my essay is that we'll only be able to meet the challenges mankind is facing if our social systems are organized so that we can react to complex and emerging problems caused by our interactions with each other and with our environment. That will only be possible if we have the relevant information and use it. And we'll only use this information if it's cheap, in the sense of being simple, fast, and intuitive to use.

Most attempts to solve the problems that we are facing are based on an unrealistic and utopian image of the average human, the well-educated, intellectual and concerned citizen who will process all available information and come to smart decisions. That is never going to happen, and that's the issue I'm taking on in my essay.

I'll be happy to answer questions about my essay. I would prefer to do this here rather than at the FQXi forum. Note though that I'll be stuck in transit for the next day. If that volcano lets me off this island that is.

Monday, August 18, 2014

DAMA annual modulation explained without invoking dark matter

Annual modulation of DAMA data.
Image credits: DAMA Collaboration.
Physicists have plenty of evidence for the existence of dark matter, matter much like the kind we are made of except that it does not emit any light. However, so far all this evidence comes from the gravitational pull of dark matter, which affects the motion of stars, the formation of structures, and acts as a gravitational lens to bend light, all of which has been observed. We still do not know, however, the microscopic nature of dark matter. What type of particle (or particles?) is it constituted of, and what are its interactions?

Few physicists today doubt that dark matter exists and is some type of particle which has just evaded detection so far. First, there is all the evidence for its gravitational interaction. Add to this that we don’t know any good reason why all matter should couple to photons, and on this ground we can actually expect the existence of dark matter. Moreover, we have various candidate theories for physics beyond the standard model that contain particles which fulfil the necessary properties for dark matter. Finally, alternative explanations, by modifying gravity rather than adding a new type of matter, are disfavored by the existing data.

Not so surprisingly, then, dark matter has come to dominate the search for physics beyond the standard model. We seem to be so very close!

Infuriatingly though, despite many experimental efforts, we still have no evidence for the interaction of dark matter particles, neither among each other nor with the matter that we are made of. Many experiments are searching for evidence of these interactions. It is the very nature of dark matter – the fact that it interacts so weakly with our normal matter and with itself – which makes finding evidence so difficult.

One type of evidence being looked for is decay products of dark matter interactions in astrophysical processes. There are presently several observations, such as the Fermi γ-ray excess or the positron excess, whose astrophysical origin is not presently understood and which could therefore be due to dark matter. But astrophysics combines a lot of processes at many energy and density scales, and it is hard to exclude that a signal was caused by particles of the standard model alone.

Another type of evidence that is being sought after comes from experiments designed to be sensitive to the very rare interaction of dark matter with our normal matter when it passes through the planet. These experiments have the advantage of happening in a known and controlled environment (as opposed to somewhere in the center of our galaxy). The experiments are typically located deep underground in old mines to filter out unwanted types of particles, collectively referred to as “background”. Whether or not an experiment can detect dark matter interactions within a certain amount of time depends on the density and coupling strength of dark matter, and so also on the type of detector material.

So far, none of the dark matter searches has resulted in a statistically significant positive signal. They have set constraints on the coupling and density of dark matter. Valuable, yes, but frustrating nevertheless.

One experiment that has instilled both hope and controversy among physicists is the DAMA experiment. The DAMA experiment sees an unexplained annual modulation in its event rate at high statistical significance. If the signal were caused by dark matter, we would expect an annual modulation due to our motion around the Sun: the event rate depends on the orientation of the detector relative to our motion and should peak around June 2nd, consistent with the DAMA data.

There are of course other signals with an annual modulation that can cause reactions with the material in and around the detector. Notably there is the flux of muons, which are produced when cosmic rays hit the upper atmosphere. The muon flux however depends on the temperature in the atmosphere and peaks approximately 30 days too late to explain the observations. The DAMA collaboration has taken into account all other kinds of backgrounds that they could think of, or that other physicists could think of, but dark matter remained the best way to explain the data.

The DAMA experiment has received much attention not primarily because of the presence of the signal, but because of the physicists’ failure to explain the signal with anything but dark matter. It adds to the controversy though that the DAMA signal, if due to dark matter, seems to lie in a parameter range already excluded by other dark matter searches. Then again, this may be due to differences in the detectors. The issue has been discussed back and forth for about a decade now.

All this may change now that Jonathan Davis from the University of Durham, UK, in a recent paper demonstrated that the DAMA signal can be fitted by combining the atmospheric muon flux with the flux of solar neutrinos:
    Fitting the annual modulation in DAMA with neutrons from muons and neutrinos
    Jonathan H. Davis
    arxiv:1407.1052
The neutrinos interact with the rock surrounding the detector, thereby creating secondary particles which contribute to the background. The strength of the neutrino signal depends on the Earth’s distance to the Sun and peaks around January 2nd. In his paper, Davis demonstrates that for certain values of the muon and neutrino fluxes these two modulations combine to fit the DAMA data very well, as well as a dark matter explanation does. And that is after he corrects the goodness of the fit for the larger number of parameters.
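The key point is that two annual modulations with different peak days add up to a single annual modulation whose peak lies somewhere in between. Here is a toy numerical sketch of that phasor addition; the amplitudes below are illustrative placeholders of my choosing, not Davis’s fitted values:

```python
import numpy as np

omega = 2 * np.pi / 365.0  # angular frequency of one year
t = np.arange(365.0)       # day of year

# Muon background peaks around July 1 (day ~182); the neutrino-induced
# background peaks around January 2 (day ~2). Amplitudes are made up
# purely for illustration.
muon = 1.00 * np.cos(omega * (t - 182))
neutrino = 0.93 * np.cos(omega * (t - 2))

combined = muon + neutrino
peak_day = t[np.argmax(combined)]
print(peak_day)  # close to day 152, i.e. early June
```

For this amplitude ratio the combined modulation peaks in early June, close to DAMA’s June 2nd; whether the physical fluxes actually combine this way is exactly what Davis’s fit quantifies.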

Moreover, Davis discusses how the two possible explanations could be distinguished from each other, for example by analyzing the data for residual changes in the solar activity that should not be present if the signal was due to dark matter.

Tim Tait, professor of theoretical particle physics at the University of California, Irvine, commented that “[This] may be the first self-consistent explanation for DAMA.” Though of course one has to be cautious not to jump to conclusions, since Davis’ argument is partly based on estimates for the reaction rate of neutrinos with the rock that have to be confirmed with more quantitative studies. Thomas Dent, a former particle cosmologist now working in gravitational wave data analysis, welcomed Davis’ explanation: “DAMA has been a distraction to theorists for too long.”

This post first appeared July 17, 2014, on Starts With A BANG with the title "How the experiment that claimed to detect dark matter fooled itself".

Thursday, August 14, 2014

Away note and Interna

Lara

I'll be traveling the next three weeks, so please be prepared for little or unsubstantial action on this blog. Next week I'm in Reykjavik for a network meeting on "Holographic Methods and Applications". August 27-29 I'm running the Science Writers Workshop in Stockholm together with George, this year on the topic "Quantum Theory." The first week of September then I'm in Trieste for the 2014 conference on Experimental Search for Quantum Gravity, where I'll be speaking about space-time defects.

Unfortunately, this traveling happens just during the time when our Kindergarten is closed, and so it's quite some stress-test for my dear husband. Since you last heard from Lara and Gloria, they have learned to count, use the swing, and are finally potty trained. They can dress themselves, have given up requesting being carried up the stairs, and we mostly get around without taking along the stroller. Yes, life has become much easier. Gloria however still gets motion sick in the car, so we either have to drug her or pull over every 5 minutes. By and large we try to avoid long road trips.

The girls now have more of a social life than I do, and we basically can't leave the house without meeting other children they know, with whom they have to discuss whether Friday comes before or after Wednesday. That Lara and Gloria are twins apparently contributes greatly to their popularity. Every once in a while, when I drop off the kids at Kindergarten, some four-foot dwarf will demand to know whether it's really true that they were together in mommy's tummy, and inspect me with a skeptical look. The older children tell me that the sisters are so cute, and then try to pat Gloria's head, which she hates.
Gloria

Gloria is still a little ahead of Lara when it comes to developing new skills. She learned to speak a little earlier, to count a little earlier, was potty trained a little earlier, and learned to dress herself a little earlier. Then she goes on to explain to Lara what to do. She also "reads" books to Lara, basically by memorizing the stories.

Lara on the other hand is still a little ahead in her physical development. She is still a bit taller, and more often than not, when I come to pick them up at Kindergarten, Lara will be kicking or throwing some ball while Gloria plays in the sandbox - and afterwards Gloria will insist on taking off her shoes, pouring out the sand, and cleaning her socks before she gets into the car. Lara takes off her shoes in the car and pours the sand into the seat pocket. Lara uses her physical advantage over Gloria to take away toys. Gloria takes revenge by telling everybody what Lara did wrong again, like putting her shoe on the wrong foot.

The best recent development is that the girls have finally, after a quite difficult phase, stopped kicking and hitting me and telling me to go away. They now call me "my little mommy" and want me to bake cookies for them. Yes, my popularity has greatly increased with them figuring out that I'm not too bad with cakes and cookies. They don't particularly like my cooking but that's okay, because I don't like it either.

On an entirely different note, as some of you have noticed already, I agreed to write for Ethan Siegel at Starts With A Bang. So far there are two pieces from me over there: How the experiment that claimed to detect dark matter fooled itself and The Smallest Possible Scale in the Universe. The deal is that I can repost what gets published there on this blog after 30 days, which I will do. So if you're only interested in my writing, you're well off here, but do check out his site because it's full of interesting physics writing.


Tuesday, August 12, 2014

Do we write too many papers?

Every Tuesday, when the weekend submissions appear on the arXiv, I think we’re all writing too many papers. Not to mention that we work too often on weekends. Every Friday, when another week has passed in which nobody solved my problems for me, I think we’re not writing enough papers.

The Guardian recently published an essay by Timo Hannay, titled “Stop the deluge of science research”, though the URL suggests the original title was “Why we should publish less scientific research.” Hannay argues that the literature has become unmanageable and that we need better tools to structure and filter it so that researchers can find what they are looking for. That is, he doesn’t actually say we should publish less. Of course we all want better boats to stay afloat on the information ocean, but there are other aspects to the question of whether we publish too many papers that Hannay didn’t touch upon.

Here, I use “too many” to mean that the number of papers hinders scientific progress and no longer benefits it. The actual number depends very much on the field and its scientific culture and doesn’t matter all that much. Below I’ve collected some arguments that speak for or against the “too many papers” hypothesis.

Yes, we publish too many papers!
  • Too much to read, even with the best filter. The world doesn’t need to know about all these incremental steps, most of which never lead anywhere anyway.
  • Wastes the time of scientists who could be doing research instead. Publishing several short papers instead of one long one adds the time necessary to write several introductions and conclusions, adapt the papers to different journals’ styles, and fight with various sets of referees, just to then submit the paper to another journal and start all over again.
  • Just not reading them isn’t an option, because one needs to know what’s going on. That creates a lot of headache, especially for newcomers. Better to publish only what’s really essential knowledge.
  • Wastes the time of editors and referees. Editors and referees typically don’t have access to reports on manuscripts that follow-up works are based on.
No, we don’t publish too many papers!
  • If you think it’s too much, then just don’t read it.
  • If you think it’s too much, you’re doing it wrong. It’s all a matter of tagging, keywords, and search tools.
  • It’s good to know what everybody is doing and to always be up to date.
  • Journals make money with publishing our papers, so don’t worry about wasting their time.
  • Who really wants to write a referee report for one of these 30 pages manuscripts anyway?
Possible reasons that push researchers to publish more than is good for progress:
  • Results pressure. Scientists need published papers to demonstrate outcome of research they received grants for.
  • CV boosting. Lots of papers looks like lots of ideas, at least if one doesn’t look too closely. (Especially young postdocs often believe they don’t have enough papers, so let me add a word of caution. Having too many papers can also work against you because it creates the appearance that your work is superficial. Aim at quality, not quantity.)
  • Scooping angst. In fields which are overpopulated, like for example hep-th, researchers publish anything that might go through just to have a time-stamp that documents they were first.
  • Culture. Researchers adapt the publishing norms of their peers and want to live up to their expectations. (That however might also have the result that they publish less than is good for progress, depending on the prevailing culture of the field.)  
  • PhD production machinery. It’s becoming the norm at least in physics that PhD students already have several publications, typically with their PhD supervisor. Much of this is to make it easier for the students to find a good postdoc position, which again falls back positively on the supervisor. This all makes the hamster wheel turn faster and faster.
Altogether I don’t have a strong opinion on whether we’re publishing too much or not. What I do find worrisome though is that all these measures of scientific success reduce our tolerance for individuality. Some people write a lot, some less so. Some pay a lot of attention to detail, some rely more on intuition. Some like to discuss and get feedback early to sort out their thoughts, some like to keep their thoughts private until they’ve sorted them out themselves. I think everybody should do their research the way it suits them best, but unfortunately we’re all increasingly forced to publish at rates close to the field average. And who said that the average is the ideal?

Monday, August 11, 2014

When the day comes [video]

Because I know you couldn't dream of anything better than starting your week with one of my awesome music videos. This one is for you, who just missed another deadline, and for you who still haven't done what you said you would, and for you, yes you, who still haven't sent title and abstract.


I'm getting somewhat frustrated with the reverb tails; I think I have to make something less complicated. The background choir is really hard to get in the right place without creating a mush. And as always, the video making was quite frustrating. I can't get the cuts in the video to be properly in sync with the audio, mainly because I can't see the audio track in my video editor. I'm using Corel VideoStudio Pro X; can anybody recommend software better suited to the task?

Monday, August 04, 2014

What is a singularity?

Not Von Neumann's urinal, but a
model of an essential singularity.
[Source: Wikipedia Commons.]
I recently read around a bit about the technological singularity, but it’s hard. It’s hard because I have to endure sentences like this:
“Singularity is a term derived from physics, where it means the point at the unknowable centre of a black hole where the laws of physics break down.”
Ouch. Or this:
“[W]e cannot see beyond the [technological] singularity, just as we cannot see beyond a black hole's event horizon.”
Aargh. Then I thought certainly they must have looked up the word in a dictionary, how difficult can it be? In the dictionary, I found this:
sin-gu-lar-i-ty
noun, plural sin-gu-lar-i-ties for 2–4.

1. the state, fact, or quality of being singular.
2. a singular, unusual, or unique quality; peculiarity.
3. Mathematics, singular point.
4. Astronomy (in general relativity) the mathematical representation of a black hole.
I don’t even know where to start complaining. Yes, I did realize that black holes and event horizons made it into pop culture, but little did I realize that something as seemingly simple as the word “singularity” is surrounded by such misunderstanding.

Von Neumann.

Let me start with some history. Contrary to what you read in many places, it was not Vernor Vinge who first used the word “singularity” to describe a possible breakdown of predictability in technological development; it was John von Neumann.

Von Neumann may be known to you as the man behind the von Neumann entropy. He was a genius of many talents, one of a now almost extinct breed, who contributed to many disciplines in math and physics, and to what are now interdisciplinary fields like game theory and quantum information.

In Chapter 16 (p. 157) of Stanislaw Ulam’s biography of von Neumann, published in 1958, one reads:
“One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
The term “singularity” was then picked up in 1993 by Vinge, who coined the expression “technological singularity”. But let us dwell for a moment on the above von Neumann quote. Ulam speaks of an “essential singularity”. You may be forgiven for mistaking the adjective “essential” for a filler, but “essential singularity” is a technical expression, typically found in the field of complex analysis.

A singularity in mathematics is basically a point at which a function is undefined. It might be undefined merely because you didn’t define it there, while it is still possible to continue the function through that point. In this case the singularity is said to be removable and, in some sense, just isn’t an interesting singularity, so let us leave it aside.
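As a quick numerical illustration (a Python sketch; the function name is my choice), take sin(x)/x: it is undefined at x = 0 as written, but the values approach a perfectly finite limit, so defining the value there continues the function through the point:

```python
import math

def sinc(x):
    """sin(x)/x -- undefined at x = 0 as written."""
    return math.sin(x) / x

# approaching x = 0, the values tend to 1:
samples = [sinc(10.0**(-k)) for k in range(1, 8)]

# setting sinc(0) = 1 continues the function through the point,
# which is exactly what "removable singularity" means
```

Nothing dramatic happens at such a point; the gap in the definition is an accident of notation, not a feature of the function.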

What one typically means by a singularity is a point where a function behaves badly, so that the function or one or several of its derivatives diverge, that is, go to infinity. The ubiquitous example in school math is the poles of inverse powers of x, which diverge as x goes to zero.

However, such poles are not malign; you can remove them easily enough by multiplying the function by the respective positive power. Of course this gives you a different function, but that function still carries much of the information of the original one, notably all the coefficients in a series expansion. This procedure of removing (or creating) poles is very important in complex analysis, where it is needed to obtain the “residues” of a function.
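A toy example (a real-valued Python sketch; properly, residues live in complex analysis, and the numbers here are my own): the function 1/x² + 2/x + 3 has a second-order pole at x = 0. Multiplying by x² removes the pole, and the coefficients of the original function survive; in particular the 1/x coefficient – the residue, here 2 – can be read off the now smooth function:

```python
def f(x):
    # a function with a second-order pole at x = 0
    return 1.0 / x**2 + 2.0 / x + 3.0

def g(x):
    # multiplying by x**2 removes the pole: g(x) = 1 + 2x + 3x**2,
    # which extends smoothly to g(0) = 1
    return x**2 * f(x)

# the 1/x coefficient of f (the residue, 2) is g'(0),
# estimated here by a central difference:
h = 1e-5
residue = (g(h) - g(-h)) / (2.0 * h)
```

The multiplied function is a different function, but it remembers everything about the original one except where the pole sat.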

Some singularities however cannot be removed by multiplication with any positive power. These are the cases in which the function contains an infinite number of negative powers in its expansion; the most commonly used example is exp(-1/x²) at x=0. Such a singularity is said to be “essential”. Please appreciate the remarkable fact that the function itself does not diverge as x goes to zero, but neatly goes to zero! So do all its derivatives!
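You can watch this numerically (a Python sketch, names mine): exp(-1/x²) shrinks faster than any power of x as x goes to zero, which is exactly why every one of its derivatives at zero vanishes:

```python
import math

def flat(x):
    # essential singularity at x = 0, yet the function itself
    # goes neatly to zero as x -> 0
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# flat(x) vanishes faster than ANY power of x: even after dividing
# by x**10 the values still rush to zero as x shrinks
ratios = [flat(x) / x**10 for x in (0.5, 0.2, 0.1)]
```

Replace the exponent 10 by any power you like; the ratios still collapse to zero, which is the numerical face of the “infinitely many negative powers” in the expansion.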

So what did von Neumann mean by referring to an essential singularity?

From the context it seems he referred to the breakdown of predictability at this point. If all derivatives of a function vanish at a point, a Taylor expansion around that point cannot reproduce the function, so you basically don’t know what happens once you hit that point. This is a characteristic feature of essential singularities. (The radius of convergence cannot be pushed through the singular point.)
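Concretely (a Python sketch, using the two-sided flat function exp(-1/x²) as an example): every Taylor coefficient at zero vanishes, so the best polynomial “prediction” from the point x = 0 is identically zero – and it is simply wrong away from that point:

```python
import math

def flat(x):
    # all derivatives of this function at x = 0 are zero
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# every Taylor polynomial of flat around 0 is the zero polynomial,
# so the series "predicts" zero everywhere -- but the function is not zero:
taylor_prediction = 0.0
actual = flat(0.5)  # exp(-4), clearly nonzero
```

Knowing the function and all its derivatives at the singular point tells you nothing about its behavior beyond it; that is the breakdown of predictability.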

However, the predictability of the laws of nature that we have (so far) never breaks down in this very sense. It breaks down because the measurement in quantum theory is non-deterministic, but that has, for all we know, nothing to do with essential singularities. (Yes, I’ve tried to make this connection. I’ve always been fond of essential singularities. Alas, not even the Templeton Foundation wanted anything to do with my great idea. So much for the reality of research.)

Geodesic incompleteness.
Artist's impression.
The other breakdown of predictability that we know of comes from singularities in general relativity. These are technically not essential singularities if you ask for the behavior of certain observables – they are typically poles or conical singularities. But they bear a resemblance to essential singularities through a concept known as “geodesic incompleteness”. It basically means that there are curves in space-time which end at finite proper time and cannot be continued. It’s like a marathon runner hitting the wall at kilometer 32.
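To make “ends at finite proper time” concrete, here is a sketch (Python, units G = c = 1, mass and step count my choices) integrating the proper time for radial free fall from rest at the horizon of a Schwarzschild black hole of mass M. The analytic answer is πM – a finite number at which the world line simply stops:

```python
import math

# radial free fall from rest at r0 in Schwarzschild coordinates:
# dtau/dr = -1 / sqrt(2M/r - 2M/r0)   (with G = c = 1)
M = 1.0
r0 = 2.0 * M          # start at the horizon
N = 200000            # midpoint-rule integration steps
h = r0 / N

# integrate the proper time from r = r0 down to the singularity at r = 0
tau = sum(h / math.sqrt(2.0*M/(h*(i + 0.5)) - 2.0*M/r0) for i in range(N))
# tau comes out close to pi*M: the geodesic ends after FINITE proper time
```

After that finite proper time there is simply no more curve; the infalling observer’s clock does not tick forever, it stops.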

The reason the continuation is impossible is that a singularity is a singularity is a singularity, no matter how you got there. You lose all information about your past when you hit it. (This is why, incidentally, the Horowitz-Maldacena proposal to resolve the black hole information loss problem by putting initial conditions on the singularity makes a lot of sense to me. Imho a totally under-appreciated idea.)

A common confusion about black holes concerns the nature of the event horizon. You can construct certain quantities of the black hole space-time that diverge at the event horizon. In the mathematical sense they are singular, and that did confuse many people after the black hole space-time was first derived; it took until the middle of the last century to fully understand that these quantities do not correspond to physical observables. The physically relevant singularity is where geodesics end, at the center of the black hole. It corresponds to an infinitely large curvature. (This is an observer-independent statement.) Nothing special happens upon horizon crossing, except that one can never get out again.
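A small numerical sketch of this distinction (Python; G = c = 1 and M = 1 are my choices): the Schwarzschild metric component g_rr blows up at the horizon r = 2M, but the Kretschmann scalar – a curvature invariant, hence observer independent – stays perfectly finite there and only diverges at r = 0:

```python
M = 1.0

def g_rr(r):
    # Schwarzschild metric component; diverges at r = 2M,
    # but this is a coordinate artifact, not a physical singularity
    return 1.0 / (1.0 - 2.0 * M / r)

def kretschmann(r):
    # curvature invariant K = 48 M^2 / r^6: finite at the horizon,
    # divergent only at the central singularity r = 0
    return 48.0 * M**2 / r**6

near_horizon = (g_rr(2.0 + 1e-9), kretschmann(2.0 + 1e-9))
near_center = kretschmann(1e-3)
```

The metric component can be made finite at the horizon simply by choosing better coordinates; no choice of coordinates tames the curvature invariant at the center.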

The singularity inside black holes is widely believed not to exist though, exactly because it implies a breakdown of predictability and causes the so paradoxical loss of information. The singularity is expected to be removed by quantum gravitational effects. The defining property of the black hole is the horizon, not the singularity. A black hole with the singularity removed is still a black hole. A singularity with the horizon removed is a naked singularity, no longer a black hole.

What has all of this to do with the technological singularity?

Nothing, really.

To begin with, there are like 17 different definitions of the technological singularity (no kidding). None of them has anything to do with an actual singularity, neither in the mathematical nor in the physical sense, and we have absolutely no reason to believe that the laws of physics, or predictability in general, break down within the next decades or so. In principle.

In practice, on some emergent level of an effective theory, I can see predictability becoming impossible. How do you want to predict what an artificial intelligence will do without having something more powerful than that artificial intelligence already? Not that anybody has been able to predict what averagely intelligent humans will do. Indeed one could say that predictability becomes more difficult with absence of intelligence, not the other way round, but I digress.

Having said all that, let us go back to these scary quotes from the beginning:
“Singularity is a term derived from physics, where it means the point at the unknowable centre of a black hole where the laws of physics break down.”
The term singularity comes from mathematics. It does not mean “at the center of a black hole”, but it can be “like the center of a black hole” – provided you are talking about the classical black hole solution, which is, however, believed not to be realized in nature.
“[W]e cannot see beyond the [technological] singularity, just as we cannot see beyond a black hole's event horizon.”
There is no singularity at the black hole horizon, and predictability does not break down at the black hole horizon. You cannot see beyond a black hole horizon as long as you stay outside the black hole. If you jump in, you will see – and then die. But I don’t know what this has to do with technological development; maybe I just didn’t read the Facebook fine print closely enough.

And finally there’s this amazing piece of nonsense:
“Singularity: Astronomy. (in general relativity) the mathematical representation of a black hole.”
To begin with, general relativity is not a field of astronomy. But worse, the “mathematical representation of a black hole” is certainly not a singularity. The mathematical representation of a (classical) black hole is the black hole space-time, and it contains a singularity.

And just in case you wondered, singularities have absolutely nothing to do with singing, except that you find both on my blog.