Astronomy Without A Telescope – Flat Universe


A remarkable finding of the early 21st century, one that sits alongside the Nobel prize winning discovery of the universe’s accelerating expansion, is that the universe is geometrically flat. This is an unexpected feature of a universe that is expanding – let alone one that is expanding at an accelerated rate – and, like the accelerating expansion, it is a key feature of our current standard model of the universe.

It may be that the flatness is just a consequence of the accelerating expansion – but to date this cannot be stated conclusively.

As usual, it’s all about Einstein. The Einstein field equations enable the geometry of the universe to be modelled – and a great variety of different solutions have been developed by different cosmology theorists. Some key solutions are the Friedmann equations, which calculate the shape and likely destiny of the universe, with three possible scenarios:
closed universe – with a contents so dense that the universe’s space-time geometry is drawn in upon itself in a hyper-spherical shape. Ultimately such a universe would be expected to collapse in on itself in a big crunch.
open universe – without sufficient density to draw in space-time, producing an outflung hyperbolic geometry – commonly called a saddle-shape – with a destiny to expand forever.
flat universe – with a ‘just right’ density – although an unclear destiny.

The Friedmann equations were used in twentieth century cosmology to try and determine the ultimate fate of our universe, with few people thinking that the flat scenario would be a likely finding – since a universe might be expected to only stay flat for a short period, before shifting to an open (or closed) state because its expansion (or contraction) would alter the density of its contents.

Matter density was assumed to be key to geometry – and estimates of the matter density of our universe came to around 0.2 atoms per cubic metre, while the relevant part of the Friedmann equations calculated that the critical density required to keep our universe flat would be 5 atoms per cubic metre. Since we could only find 4% of the required critical density, this suggested that we probably lived in an open universe – but then we started coming up with ways to measure the universe’s geometry directly.
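
As a rough check of those numbers, here is a minimal sketch of the critical density calculation from the Friedmann equations, rho_crit = 3H^2/8πG, assuming a Hubble constant of about 70 km/s/Mpc (a representative value, not a figure quoted in the article):

```python
# Back-of-envelope check of the densities quoted above, assuming H0 ~ 70 km/s/Mpc.
import math

G   = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
m_H = 1.67e-27           # mass of a hydrogen atom (kg)
Mpc = 3.086e22           # metres per megaparsec

H0 = 70e3 / Mpc                               # Hubble constant in s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)      # critical density (kg per m^3)

print(rho_crit / m_H)                         # ~5.5 hydrogen atoms per m^3
print(0.2 / (rho_crit / m_H) * 100)           # observed matter: roughly 4% of critical
```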

There’s a YouTube video of Lawrence Krauss (of The Physics of Star Trek fame) explaining how this is done with cosmic microwave background data (from WMAP and earlier experiments) – where the CMB mapped on the sky represents one side of a triangle with you at its opposite apex, looking out along its two other sides. The angles of the triangle can then be measured, and they will add up to 180 degrees in a flat (Euclidean) universe, more than 180 in a closed universe and less than 180 in an open universe.
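
The geometric idea can be illustrated with the angular excess formula for a geodesic triangle on a surface of constant curvature K (angle sum = 180 degrees plus K times area, in suitable units) – a toy sketch of the principle, not the actual CMB analysis:

```python
# Toy illustration of the triangle test: on a surface of constant curvature K,
# a geodesic triangle of area A has an angle sum of pi + K*A radians.
import math

def angle_sum_degrees(area, K):
    """Sum of the interior angles of a geodesic triangle of the given area."""
    return math.degrees(math.pi + K * area)

area = 0.1   # arbitrary units, with the radius of curvature set to 1
print(angle_sum_degrees(area, +1.0))   # closed universe: more than 180 degrees
print(angle_sum_degrees(area,  0.0))   # flat universe:   exactly 180 degrees
print(angle_sum_degrees(area, -1.0))   # open universe:   less than 180 degrees
```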

These findings, indicating that the universe was remarkably flat, came at the turn of the century around the same time that the 1998 accelerated expansion finding was announced.

Although the contents of the early universe may have just been matter, we now must add dark energy to explain the universe's persistent flatness. Credit: NASA.

So really, it is the universe’s flatness and the estimate that there is only 4% (0.2 atoms per cubic metre) of the matter density required to keep it flat that drives us to call on dark stuff to explain the universe. Indeed we can’t easily call on matter alone, whether luminous or dark, to account for how our universe sustains its critical density in the face of expansion, let alone accelerated expansion – since whatever makes up the difference appears out of nowhere. So, we appeal to dark energy to make up the deficit – without having a clue what it is.

Given how little influence conventional matter appears to have on our universe’s geometry, one might question the continuing relevance of the Friedmann equations in modern cosmology. There is more recent interest in the De Sitter universe, another Einstein field equation solution, which models a universe with no matter content – its expansion and evolution being entirely the result of the cosmological constant.

De Sitter universes, at least on paper, can be made to expand with accelerating expansion and remain spatially flat – much like our universe. From this, it is tempting to suggest that universes naturally stay flat while they undergo accelerated expansion – because that’s what universes do, their contents having little direct influence on their long-term evolution or their large-scale geometry.

But who knows really – we are both literally and metaphorically working in the dark on this.

Further reading:

Krauss: Why the universe probably is flat (video).

Astronomy Without A Telescope – Light Speed

The effect of time dilation is negligible for common speeds, such as that of a car or even a jet plane, but it increases dramatically when one gets close to the speed of light.


The recent news of neutrinos moving faster than light might have got everyone thinking about warp drive and all that, but really there is no need to imagine something that can move faster than 300,000 kilometres a second.

Light speed, or 300,000 kilometres a second, might seem like a speed limit, but this is just an example of 3 + 1 thinking – where we still haven’t got our heads around the concept of four dimensional space-time and hence we think in terms of space having three dimensions and think of time as something different.

For example, while it seems to us that it takes a light beam 4.3 years to go from Earth to the Alpha Centauri system, if you were to hop on a spacecraft going at 99.999 per cent of the speed of light you would get there in a matter of days, hours or even minutes – depending on just how many .99s you add on to that proportion of light speed.

This is because, as you keep pumping the accelerator of your imaginary star drive system, time dilation will become increasingly more pronounced and you will keep getting to your destination that much quicker. With enough .999s you could cross the universe within your lifetime – even though someone you left behind would still only see you moving away at a tiny bit less than 300,000 kilometres a second. So, what might seem like a speed limit at first glance isn’t really a limit at all.
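
Here is a minimal sketch of that claim, assuming the 4.3 light year distance to Alpha Centauri quoted earlier and a constant cruising speed (ignoring acceleration and deceleration):

```python
# On-board (proper) travel time shrinks by the Lorentz factor gamma as v approaches c.
import math

def onboard_travel_time_years(distance_ly, v_over_c):
    gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)
    earth_frame_years = distance_ly / v_over_c   # trip duration as seen from Earth
    return earth_frame_years / gamma             # trip duration as experienced on board

for v in (0.99999, 0.9999999, 0.999999999):
    print(v, onboard_travel_time_years(4.3, v) * 365.25, "days")
    # ~7 days, ~0.7 days (about 17 hours), ~0.07 days (about 100 minutes)
```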

The effect of time dilation is negligible for common speeds we are familiar with on Earth, but it increases dramatically and asymptotically as you approach the speed of light.

To try and comprehend the four dimensional perspective on this, consider that it’s impossible to move across any distance without also moving through time. For example, walking a kilometre may take thirty minutes – but if you run, it might take only fifteen.

Speed is just a measure of how long it takes you to reach a distant point. Relativity physics lets you pick any destination you like in the universe – and with the right technology you can reduce your travel time to that destination to any extent you like – as long as your travel time stays above zero.

That is the only limit the universe really imposes on us – and it’s as much about logic and causality as it is about physics. You can travel through space-time in various ways to reduce your travel time between points A and B – and you can do this up until you almost move between those points instantaneously. But you can’t do it faster than instantaneously because you would arrive at B before you had even left A.

If you could do that, it would create impossible causality problems – for example you might decide not to depart from point A, even though you’d already reached point B. The idea is both illogical and a breach of the laws of thermodynamics, since the universe would suddenly contain two of you.

So, you can’t move faster than light – not because of anything special about light, but because you can’t move faster than instantaneously between distant points. Light essentially does move instantaneously, as does gravity and perhaps other phenomena that we are yet to discover – but we will never discover anything that moves faster than instantaneously, as the idea makes no sense.

We mass-laden beings experience duration when moving between distant points – and so we are able to also measure how long it takes an instantaneous signal to move between distant points, even though we could never hope to attain such a state of motion ourselves.

We are stuck on the idea that 300,000 kilometres a second is a speed limit, because we intuitively believe that time runs at a constant universal rate. However, we have proven in many different experimental tests that time clearly does not run at a constant rate between different frames of reference. So with the right technology, you can sit in your star-drive spacecraft and make a quick cup of tea while eons pass by outside. It’s not about speed, it’s about reducing your personal travel time between two distant points.

As Woody Allen once said: Time is nature’s way of keeping everything from happening at once. Space-time is nature’s way of keeping everything from happening in the same place at once.

Astronomy Without A Telescope – FTL Neutrinos (Or Not)

Location of the Gran Sasso OPERA neutrino experiment in Italy - which receives a beam of neutrinos from CERN - apparently at faster than the speed of light, if you can believe it. Credit: CERN.


The recent news from the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) neutrino experiment, that neutrinos have been clocked travelling faster than light, made the headlines over the last week – and rightly so. The experiment involves some very robust infrastructure and measurement devices, which give the data a certain gravitas.

The researchers had appropriate cause to put their findings up for public scrutiny and peer review – and to their credit have produced a detailed paper on the subject, beyond just the media releases we have seen. Nonetheless, it has been reported that some senior members of the OPERA research team declined to be associated with this paper, considering that it was all a bit preliminary.

After all, the reported results indicate that the neutrinos crossed a distance of 730 kilometres in 60 nanoseconds less time than light would have taken. But given that light would have taken 2.4 million nanoseconds to cross the same distance – there is a lot hanging on such a proportionally tiny difference.

It would have been a different story if the neutrinos had been clocked at 1.5 or 2 times light speed, but this is more like 1.000025 times light speed. And it would have been no surprise to anyone to have found the neutrinos travelling at 99.99% of light speed, given their origin in CERN’s Super Proton Synchrotron. So, confirming that they really are exceeding light speed, but only by a tiny amount, requires supreme confidence in the measuring systems used. And there are reasons to doubt that such confidence is justified.

The distance component of the speed calculation had an error of less than 20 cm out of the 730 kilometres path, or 0.00003% if you like, over the data collection period. That’s not much error, but then the degree to which the neutrinos are claimed to have moved faster than light isn’t that much either.
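
A quick check of those figures, using the round numbers quoted in the text (a 730 kilometre baseline, a 60 nanosecond early arrival and a 20 centimetre distance uncertainty):

```python
c = 299_792_458.0            # speed of light (m/s)
baseline = 730e3             # m
early = 60e-9                # s - the neutrinos reportedly arrive 60 ns early
distance_error = 0.20        # m

t_light = baseline / c
print(t_light * 1e9)                    # ~2.4 million nanoseconds for light
print(1 + early / t_light)              # ~1.000025 times light speed
print(distance_error / baseline * 100)  # ~0.00003% distance uncertainty
```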

But the travel time component of the speed calculation is the real question mark here. The release time of neutrinos from the source could only be inferred as arising from a 10.5 microsecond burst of protons from the CERN Super Proton Synchrotron (SPS) – fired at a graphite target, which then releases neutrinos towards OPERA.

The researchers substantially constrained the potential error (i.e. the 10.5 microseconds) by comparing the time distributions of SPS proton release and neutrino detection at OPERA over repeated trials, to give a probability density function for the time of emission of the neutrinos. But this is really just a long-winded way of saying they could only estimate the likely travel time. And the dependence on GPS satellite links to time stamp the release and detection steps represents a further source of potential measurement error.

Some of the complex infrastructure required to infer the travel time of neutrinos across the OPERA experiment. Credit: Adam et al.

It’s also important to note that this was not a race. The 730 kilometre straight-line pathway to OPERA is through the Earth’s crust – which is virtually transparent to neutrinos, but opaque to light. The travel time of light is hence inferred from measuring the path distance. So it was never the case that the neutrinos were seen to beat a light beam across the path distance.

The real problem with the OPERA experiment is that the calculated bettering of light speed is a very tiny margin that has been measured over a relatively short path distance. If the experiment could be repeated by firing at a neutrino detector on the Moon say, that longer path distance would deliver more robust and more convincing data – since, if the OPERA effect is real, the neutrinos should very obviously reach the Moon quicker than a light beam could.
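
To see why a longer baseline would help, here is a sketch assuming the mean Earth–Moon distance of about 384,400 kilometres and that the claimed speed excess of roughly 2.5 × 10^-5 held over the whole path:

```python
c = 299_792_458.0            # m/s
earth_moon = 384_400e3       # m
excess = 2.5e-5              # the OPERA-claimed (v - c)/c

t_light = earth_moon / c     # ~1.28 seconds for light to reach the Moon
lead = t_light * excess      # how early the neutrinos would arrive
print(lead * 1e6)            # ~32 microseconds - versus 60 ns over 730 km
```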

Until then, it all seems a bit premature to start throwing out the physics textbooks.

Further reading:
Adam et al. Measurement of the neutrino velocity with the OPERA detector in the CNGS beam.

A contrary view – including reports that not all of the Gran Sasso team are on board with the FTL neutrino idea.

365 Days of Astronomy Now More Than 1,000 Days

September 27 2011 was the 1,000th day since the 365 Days of Astronomy podcast was instituted on 1 January 2009, the International Year of Astronomy – and due to a puzzling publishing hiccup the 1,000th episode played on September 28 2011.

This unique citizen scientist project will hopefully stumble on through to the end of 2011, but if anyone wants to see it have a life after that your support and contributions are needed today – and every day after that.

Astronomy Without A Telescope – Star Formation Laws

NGC 1569 - a relatively close (11 million light years) starburst galaxy - presumably a result of fairly efficient star formation. Credit: NASA/HST


Take a cloud of molecular hydrogen, add some turbulence and you get star formation – that’s the law. The efficiency of star formation (how big and how populous the resulting stars get) is largely a function of the density of the initial cloud.

At a galactic or star cluster level, a low gas density will deliver a sparse population of generally small, dim stars – while a high gas density should result in a dense population of big, bright stars. However, overlying all this is the key issue of metallicity – which acts to reduce star formation efficiency.

So firstly, the strong relationship between the density of molecular hydrogen (H2) and star formation efficiency is known as the Kennicutt-Schmidt Law. Atomic hydrogen is not considered to be able to support star formation, because it is too hot. Only when it cools to form molecular hydrogen can it start to clump together – after which we can expect star formation to become possible. Of course, this creates some mystery about how the first stars might have formed within a denser and hotter primeval universe. Perhaps dark matter played a key role there.
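
For the record, the Kennicutt-Schmidt law is usually written as a power law linking gas surface density to star formation rate surface density. The sketch below uses the commonly quoted Kennicutt (1998) calibration – the normalisation and exponent are not figures from this article:

```python
# Sigma_SFR ~ 2.5e-4 * (Sigma_gas)^1.4, with Sigma_gas in solar masses per square
# parsec and Sigma_SFR in solar masses per year per square kiloparsec.
def sfr_surface_density(sigma_gas, A=2.5e-4, N=1.4):
    """Star formation rate surface density implied by a gas surface density."""
    return A * sigma_gas**N

for sigma_gas in (1.0, 10.0, 100.0):
    print(sigma_gas, sfr_surface_density(sigma_gas))
    # denser gas forms stars disproportionately faster
```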

Nonetheless, in the modern universe, unbound gas can more readily cool down to molecular hydrogen due to the presence of metals, which have been added to the interstellar medium by previous populations of stars. Metals – which, to astronomers, are any elements heavier than hydrogen and helium – are able to absorb a wider range of radiation energy levels, leaving hydrogen less exposed to heating. Hence, a metal-rich gas cloud is more likely to form molecular hydrogen, which is then more likely to support star formation.

But this does not mean that star formation is more efficient in the modern universe – and again this is because of metals. A recent paper about the dependence of star formation on metallicity proposes that a cluster of stars develops from H2 clumping within a gas cloud, first forming prestellar cores which draw in more matter via gravity, until they become stars and then begin producing stellar wind.

Relationship between the power of stellar winds and stellar mass (i.e. big star has big wind) - with the effect of metallicity overlaid. The solid line is the metallicity of the Sun (Z=Zsol). High metallicity produces more powerful winds for the same stellar mass. Credit: Dib et al.

Before long, the stellar wind begins to generate ‘feedback’, countering the infall of further material. Once the outward push of the stellar wind balances the inward gravitational pull, further star growth ceases – and the bigger O and B class stars clear out any remaining gas from the cluster region, so that all star formation is quenched.

The dependence of star formation efficiency on metallicity arises from the effect of metallicity on stellar wind. High metallicity stars always have more powerful winds than lower metallicity stars of equivalent mass. Thus, a star cluster – or even a galaxy – formed from a gas cloud with high metallicity will have less efficient star formation. This is because every star’s growth is inhibited by its own stellar wind feedback in the late stages of growth, and any large O or B class stars will clear out the remaining unbound gas more quickly than their low metallicity equivalents would.

This metallicity effect is likely to be the product of ‘radiative line acceleration’, arising from the ability of metals to absorb radiation across a wide range of radiation energy levels – that is, metals present many more radiation absorption lines than hydrogen has on its own. When an ion absorbs radiation, some of the photon’s momentum is imparted to the ion, to the extent that such ions may be blown out of the star as stellar wind. The ability of metals to absorb more radiation energy than hydrogen can means you should always get more wind (i.e. more ions blown out) from high metallicity stars.
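
Purely as an illustration of the trend, stellar mass-loss rates are often parameterised as a power law in metallicity – the exponent below is a placeholder in the range commonly quoted for line-driven winds, not a value from Dib et al.:

```python
# Illustrative only: Mdot ~ (Z/Zsol)^m, with m somewhere around 0.7-0.85 in the
# stellar wind literature. Higher metallicity means a stronger wind for the same
# stellar mass, and hence earlier quenching of star formation.
def relative_mass_loss(Z_over_Zsol, m=0.8):
    return Z_over_Zsol**m

for Z in (0.2, 1.0, 2.0):        # e.g. SMC-like, solar and super-solar metallicity
    print(Z, relative_mass_loss(Z))
```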

Further reading:
Dib et al. The Dependence of the Galactic Star Formation Laws on Metallicity.

Carnival of Space #215

This week’s Carnival of Space is hosted by our very own Steve Nerlich at his very own Cheap Astronomy website.

Click here to read the Carnival of Space #215. Steve, as usual, has gone above and beyond the call of duty and has also created a podcast version of this Week’s Carnival.

And if you’re interested in looking back, here’s an archive to all the past Carnivals of Space. If you’ve got a space-related blog, you should really join the carnival. Just email an entry to [email protected], and the next host will link to it. It will help get awareness out there about your writing, help you meet others in the space community – and community is what blogging is all about. And if you really want to help out, sign up to be a host. Send an email to the above address.

Astronomy Without A Telescope – The Edge Of Significance

A two hemisphere spherical mapping of the cosmic microwave background. Credit: WMAP/NASA.


Some recent work on Type Ia supernova velocities suggests that the universe may not be as isotropic as our current standard model (LambdaCDM) requires it to be.

The standard model requires the universe to be isotropic and homogeneous – meaning it can be assumed to have the same underlying structure and principles operating throughout and it looks measurably the same in every direction. Any significant variation from this assumption means the standard model can’t adequately describe the current universe or its evolution. So any challenge to the assumption of isotropy and homogeneity, also known as the cosmological principle, is big news.

Of course, since you are hearing about such a paradigm-shifting finding within this humble column, rather than as a lead article in Nature, you can safely assume that the science is not quite bedded down yet. The Union2 data set of 557 Type Ia supernovae, released in 2010, is allegedly the source of this latest challenge to the cosmological principle – even though the data set was released with the unequivocal statement that the flat concordance LambdaCDM model remains an excellent fit to the Union2 data.

Anyhow, in 2010 Antoniou and Perivolaropoulos ran a hemisphere comparison – essentially comparing supernova velocities in the northern hemisphere of the sky with the southern hemisphere. These hemispheres were defined using galactic coordinates, where the orbital plane of the Milky Way is set as the equator and the Sun, which is more or less on the galactic orbital plane, is the zero point.

The galactic coordinate system. Credit: thinkastronomy.com

Antoniou and Perivolaropoulos’ analysis determined a preferred axis of anisotropy – with more supernovae showing higher than average velocities towards a point in the northern hemisphere (within the same ranges of redshift). This suggests that a part of the northern sky represents a part of the universe that is expanding outwards with a greater acceleration than elsewhere. If correct, this means the universe is neither isotropic nor homogeneous.
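
To make the hemisphere comparison concrete, here is a minimal sketch of the general idea using synthetic data. Antoniou and Perivolaropoulos actually fit the LambdaCDM parameters separately in each hemisphere rather than comparing raw residuals, so treat this as a cartoon of the method only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 557                                               # size of the Union2 sample
l = rng.uniform(0.0, 360.0, n)                        # galactic longitude (degrees)
b = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n)))  # galactic latitude (degrees)
residual = rng.normal(0.0, 0.15, n)                   # toy Hubble-diagram residuals (mag)

def unit_vector(l_deg, b_deg):
    """Cartesian unit vector(s) for galactic coordinates."""
    l_r, b_r = np.radians(l_deg), np.radians(b_deg)
    return np.array([np.cos(b_r) * np.cos(l_r),
                     np.cos(b_r) * np.sin(l_r),
                     np.sin(b_r)])

def hemisphere_asymmetry(axis_l, axis_b):
    """Difference in mean residual between the two hemispheres about a trial axis."""
    axis = unit_vector(axis_l, axis_b)
    towards = axis @ unit_vector(l, b) > 0.0
    return residual[towards].mean() - residual[~towards].mean()

# Scan a grid of trial axes and report the direction of maximum asymmetry.
grid = [(al, ab) for al in range(0, 360, 10) for ab in range(-80, 90, 10)]
best = max(grid, key=lambda axis: abs(hemisphere_asymmetry(*axis)))
print(best, hemisphere_asymmetry(*best))
```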

However, they note that their statistical analysis does not necessarily correspond with statistically significant anisotropy, and they then seek to strengthen their finding by appealing to other anomalies in cosmic microwave background data which also show anisotropic tendencies. So this seems to be a case of looking at a number of unrelated findings with common trends – findings that in isolation are not statistically significant – and then arguing that, put together, they somehow achieve a consolidated significance that they did not possess in isolation.

More recently, Cai and Tuo ran much the same hemispherical analysis and, not surprisingly, got much the same result. They then tested whether these data favoured one dark energy model over another – which they didn’t. Nonetheless, on the strength of this, Cai and Tuo gained a write-up in the Physics ArXiv blog under the heading More Evidence for a Preferred Direction in Spacetime – which seems a bit of a stretch, since it’s really just the same evidence separately analysed for another purpose.

It’s reasonable to doubt that anything has been definitively resolved at this point. The weight of current evidence still favours an isotropic and homogeneous universe. While there’s no harm in mucking about at the edge of statistical significance with whatever limited data are available, such fringe findings may be quickly washed away when new data come in – for example, more Type Ia supernova velocity measures from a new sky survey, or a higher resolution view of the cosmic microwave background from the Planck spacecraft. Stay tuned.

Further reading:
– Antoniou and Perivolaropoulos. Searching for a Cosmological Preferred Axis: Union2 Data Analysis and Comparison with Other Probes.
– Cai and Tuo. Direction Dependence of the Deceleration Parameter.

Astronomy Without A Telescope – New Physics?

The Sun affects a lot of things on Earth – but radioactive decay isn’t normally considered to be one of those things. Credit: NASA.


Radioactive decay – a random process right? Well, according to some – maybe not. For several years now a team of physicists from Purdue and Stanford have reviewed isotope decay data across a range of different isotopes and detectors – seeing a non-random pattern and searching for a reason. And now, after eliminating all other causes – the team are ready to declare that the cause is… extraterrestrial.

OK, so it’s suggested to just be the Sun – but cool finding, huh? Well… maybe it’s best to first put on your skeptical goggles before reading through anyone’s claim of discovering new physics.

Now, it’s claimed that there is a certain periodicity to the allegedly variable radioactive decay rates. A certain annual periodicity suggests a link to the varying distance from the Sun to the Earth, as a result of the Earth’s elliptical orbit – as well as there being other overlying patterns of periodicity that may link to the production of large solar flares and the 11 year (or 22 year if you prefer) solar cycle.

However, the alleged variations in decay rates are proportionally tiny and there remains a good deal of criticism, citing disconfirming evidence against this somewhat radical idea. So before drawing any conclusions here, maybe we need to first consider what exactly good science is:

Replication – a different laboratory or observatory can collect the same data that you claim to have collected.
A signal stronger than noise – there is a discrete trend within your data that differs from the random noise in your data to a statistically significant degree.
A plausible mechanism – for example, if the rate of radioactive decay seems to correlate with the position and magnetic activity of the Sun – why is this so?
A testable hypothesis – the plausible mechanism proposed should allow you to predict when, or under what circumstances, the effect can be expected to occur again.

The proponents of variable radioactive decay appeal to a range of data sources to meet the replication criterion, but independent groups equally appeal to other data sources which are not consistent with variable radioactive decay. So, there’s still a question mark here – at least until more confirming data comes in, to overwhelm any persisting disconfirming data.

Whether there is a signal stronger than noise is probably the key point of debate. The alleged periodic variations in radioactive decay are proportionally tiny variations and it’s not clear whether a compellingly clear signal has been demonstrated.

An accompanying paper outlines the team’s proposed mechanism – although this is not immediately compelling either. They appeal to neutrinos, which are certainly produced in abundance by the Sun, but actually propose a hypothetical form that they call ‘neutrellos’, which necessarily interact with atomic nuclei more strongly than neutrinos are considered to do. This creates a bit of a circular argument – because we think there is an effect currently unknown to science, we propose that it is caused by a particle currently unknown to science.

So, in the context of having allegedly found a periodic variability in radioactive decay, what the proponents need to do is to make a prediction – that sometime next year, say at a particular latitude in the northern hemisphere, the radioactive decay of x isotope will measurably alter by z amount compared to an equivalent measure made, say six months earlier. And maybe they could collect some neutrellos too.

If that all works out, they could start checking the flight times to Sweden. But one assumes that it won’t be quite that easy.

The case for:
– Jenkins et al. Analysis of Experiments Exhibiting Time-Varying Nuclear Decay Rates: Systematic Effects or New Physics?  (the data)
– Fischbach et al. Evidence for Time-Varying Nuclear Decay Rates: Experimental Results and Their Implications for New Physics.  (the mechanism)

The case against:
– Norman et al. Evidence against correlations between nuclear decay rates and Earth–Sun distance.
The relevant Wikipedia entry

Astronomy Without A Telescope – Cosmic Coincidence


Cosmologists tend not to get all that excited about the universe being 74% dark energy and 26% conventional energy and matter (albeit most of the matter is dark and mysterious as well). Instead they get excited about the fact that the density of dark energy is of the same order of magnitude as that more conventional remainder.

After all, it is quite conceivable that the density of dark energy might be ten, one hundred or even one thousand times more (or less) than the remainder. But nope, it seems it’s about three times as much – which is less than ten and more than one, meaning that the two parts are of the same order of magnitude. And given the various uncertainties and error bars involved, you might even say the density of dark energy and of the more conventional remainder are roughly equivalent. This is what is known as the cosmic coincidence.
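
A short check that the two densities really are the same order of magnitude, assuming the 74%/26% split quoted above and a critical density of roughly 9 × 10^-27 kg/m^3 (which follows from a Hubble constant of about 70 km/s/Mpc – not a figure given in the article):

```python
rho_crit = 9.2e-27                     # kg/m^3, for H0 ~ 70 km/s/Mpc

rho_dark_energy = 0.74 * rho_crit      # ~6.8e-27 kg/m^3
rho_matter      = 0.26 * rho_crit      # ~2.4e-27 kg/m^3

print(rho_dark_energy / rho_matter)    # ~2.8 - more than 1 and less than 10,
                                       # so the same order of magnitude
```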

To a cosmologist, particularly a philosophically-inclined cosmologist, this coincidence is intriguing and raises all sorts of ideas about why it is so. However, Lineweaver and Egan suggest this is actually the natural experience of any intelligent beings/observers across the universe, since their evolution will always roughly align with the point in time at which the cosmic coincidence is achieved.

A current view of the universe describes its development through the following steps:

Inflationary era – a huge whoomp of volume growth driven by something or other. This is a very quick era, lasting from 10^-35 to 10^-32 of the first second after the Big Bang.
Radiation dominated era – the universe continues expanding, but at a less furious rate. Its contents cool as their density declines. Hadrons begin to cool out from the hot quark-gluon soup while dark matter forms out of whatever it forms out of – all steadily adding matter to the universe, although radiation still dominates. This era lasts for maybe 50,000 years.
Matter dominated era – this era begins when the density of matter exceeds the density of radiation and continues through to the release of the cosmic microwave background radiation at 380,000 years, when the first atoms formed – and then continues on for a further 5 billion years. Throughout this era, the energy/matter density of the whole universe continues to gravitationally restrain the rate of expansion of the universe, even though expansion does continue.
Cosmological constant dominated era – from 5 billion years to now (13.7 billion) and presumably for all of hereafter, the energy/matter density of the universe is so diluted that it begins losing its capacity to restrain the expansion of the universe – which hence accelerates. Empty voids of space grow ever larger between local clusters of gravitationally-concentrated matter.

And here we are. Lineweaver and Egan propose that it is unlikely that any intelligent life could have evolved in the universe much earlier than now (give or take a couple of billion years) since you need to progressively cycle through the star formation and destruction of Population III, II and then I stars to fill the universe with sufficient ‘metals’ to allow planets with evolutionary ecosystems to develop.

The four eras of the universe mapped over a logarithmic time scale. Note that "Now" occurs as the decline in matter density and the acceleration in cosmic expansion cross over. Credit: Lineweaver and Egan.

So any intelligent observer in this universe is likely to find the same data which underlie the phenomenon we call the cosmological coincidence. Whether any aliens describe their finding as a ‘coincidence’ may depend upon what mathematical model they have developed to formulate the cosmos. It’s unlikely to be the same one we are currently running with – full of baffling ‘dark’ components, notably a mysterious energy that behaves nothing like energy.

It might be enough for them to note that their observations have been taken at a time when the universe’s contents no longer have sufficient density to restrain the universe’s inherent tendency to expand – and so it expands at a steadily increasing rate.

Further reading: Lineweaver and Egan. The Cosmic Coincidence as a Temporal Selection Effect Produced by the Age Distribution of Terrestrial Planets in the Universe (subsequently published in Astrophysical Journal 2007, Vol 671, 853.)

Astronomy Without A Telescope – Why The LHC Won’t Destroy The Earth

Concerns about a 'big science machine' destroying the Earth have been around since the steam engine. The LHC is the latest target for such conspiracy theories. Credit: CERN.


Surprisingly, rumors still persist in some corners of the Internet that the Large Hadron Collider (LHC) is going to destroy the Earth – even though nearly three years have passed since it was first turned on. This may be because it is not due to be ramped up to full power until 2014 – although it seems more likely that this is just a case of moving the goal posts, since the same doomsayers were initially adamant that the Earth would be destroyed the moment the LHC was switched on, in September 2008.

The story goes that the very high energy collisions engineered by the LHC could jam colliding particles together with such force that their mass would be compressed into a volume less than the Schwarzschild radius required for that mass. In other words, a microscopic black hole would form and then grow in size as it sucked in more matter, until it eventually consumed the Earth.
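
For a sense of scale, here is a sketch using the ordinary four-dimensional formula r_s = 2GM/c^2 for a mass of about 1000 protons (roughly the mass-energy scale of an LHC collision). The ‘large extra dimensions’ idea discussed below is invoked precisely because this number is so absurdly small:

```python
G   = 6.674e-11              # m^3 kg^-1 s^-2
c   = 299_792_458.0          # m/s
m_p = 1.673e-27              # proton mass (kg)

M = 1000 * m_p               # ~1000 hyper-compressed protons
r_s = 2 * G * M / c**2       # Schwarzschild radius
print(r_s)                   # ~2.5e-51 m - around 16 orders of magnitude smaller
                             # than even the Planck length (~1.6e-35 m)
```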

Here’s a brief run-through of why this can’t happen.

1. Microscopic black holes are implausible.
While a teaspoon of neutron star material might weigh several million tons, if you could extract that teaspoonful from the neutron star it would immediately expand back out into the volume that several million tons of mass would normally occupy.

Notwithstanding that you can’t physically extract a teaspoon of black hole material from a black hole – if you could, it is reasonable to expect that it would also instantly expand. You can’t maintain these extreme matter densities outside of a region of extreme gravitational compression created by the proper mass of a stellar-scale object.

The hypothetical physics that might allow for the creation of microscopic black holes (large extra dimensions) proposes that gravity gains more force in near-Planck scale dimensions. There is no hard evidence to support this theory – indeed there is a growing level of disconfirming evidence arising from various sources, including the LHC.

High energy particle collisions involve converting momentum energy into heat energy, as well as overcoming the electromagnetic repulsion that normally prevents charged particles from colliding. But the heat energy produced quickly dissipates and the collided particles fragment into sub-atomic shrapnel, rather than fusing together. Particle colliders attempt to mimic conditions similar to the Big Bang, not the insides of massive stars.

2. A hypothetical microscopic black hole couldn’t devour the Earth anyway.
Although whatever goes on inside the event horizon of a black hole is a bit mysterious and unknowable – physics still operates in a conventional fashion outside. The gravitational influence exerted by the mass of a black hole falls away by the inverse square of the distance from it, just like it does for any other celestial body.

The gravitational influence exerted by a microscopic black hole composed of, let’s say, 1000 hyper-compressed protons would be laughably small from a distance of more than its Schwarzschild radius (maybe 10^-18 metres). And it would be unable to consume more matter unless it could overcome the forces that hold other matter together – remembering that, at these scales, gravity is by far the weakest of the forces.
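
As an illustration of just how laughably small that influence is (the numbers below are illustrative choices, not figures from the article), compare the gravitational pull of such a 1000-proton-mass black hole on a proton one atomic radius away with the electrostatic repulsion between two protons at the same separation:

```python
G   = 6.674e-11              # m^3 kg^-1 s^-2
k_e = 8.988e9                # Coulomb constant (N m^2 C^-2)
m_p = 1.673e-27              # proton mass (kg)
q_p = 1.602e-19              # proton charge (C)
r   = 1e-10                  # separation (m), roughly one atomic radius

F_gravity = G * (1000 * m_p) * m_p / r**2
F_coulomb = k_e * q_p**2 / r**2
print(F_gravity)             # ~2e-41 N
print(F_coulomb)             # ~2e-8 N
print(F_gravity / F_coulomb) # gravity loses by over thirty orders of magnitude
```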

It’s been calculated that if the Earth had the density of solid iron, a hypothetical microscopic black hole in linear motion would be unlikely to encounter an atomic nucleus more than once every 200 kilometres – and if it did, it would encounter a nucleus that would be at least 1,000 times larger in diameter.

So the black hole couldn’t hope to swallow the whole nucleus in one go and, at best, it might chomp a bit off the nucleus in passing – somehow overcoming the strong nuclear force in so doing. The microscopic black hole might have 100 such encounters before its momentum carried it all the way through the Earth and out the other side, at which point it would probably still be a good order of magnitude smaller in size than an uncompressed proton.

And that still leaves the key issue of charge out of the picture. If you could jam multiple positively-charged protons together into such a tiny volume, the resultant object should explode, since the electromagnetic force far outweighs the gravitational force at this scale. You might get around this if an exactly equivalent number of electrons were also added in, but this requires appealing to an implausible level of fine-tuning.

You maniacs! You blew it up! We may not be walking on the Moon again any time soon - but we won't be destroying the Earth with an ill-conceived physics experiment any time soon either. Credit: Dean Reeves.

3. What the doomsayers say
When challenged with the standard argument that higher-than-LHC energy collisions occur naturally and frequently as cosmic ray particles collide with Earth’s upper atmosphere, LHC conspiracy theorists refer to the high school physics lesson that two cars colliding head-on is a more energetic event than one car colliding with a brick wall. This is true, to the extent that the two car collision has twice the kinetic energy as the one car collision. However, cosmic ray collisions with the atmosphere have been measured as having 50 times the energy that will ever be generated by LHC collisions.
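
The fixed-target arithmetic behind that comparison, assuming a cosmic ray proton of around 3 × 10^20 eV (roughly the highest energies ever recorded) striking a stationary nucleon – for beam energies far above the proton rest energy, the available centre-of-mass energy is approximately the square root of 2 × E_lab × m_p c^2:

```python
import math

E_lab  = 3e20                # cosmic ray proton energy (eV)
m_p_c2 = 0.938e9             # proton rest energy (eV)
lhc    = 14e12               # LHC design centre-of-mass energy (eV)

sqrt_s = math.sqrt(2 * E_lab * m_p_c2)
print(sqrt_s / 1e12)         # ~750 TeV available in the centre of mass
print(sqrt_s / lhc)          # ~50 times the LHC's design collision energy
```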

In response to the argument that a microscopic black hole would pass through the Earth before it could achieve any appreciable mass gain, LHC conspiracy theorists propose that an LHC collision would bring the combined particles to a dead stop and they would then fall passively towards the centre of the Earth with insufficient momentum to carry them out the other side.

This is also implausible. The slightest degree of transverse momentum imparted to LHC collision fragments after a head-on collision of two particles travelling at nearly 300,000 kilometres a second will easily give those fragments an escape velocity from the Earth (which is only 11.2 kilometres a second, at sea-level).

Further reading: CERN The safety of the LHC.