Astronomy Without A Telescope – Cosmic Coincidence


Cosmologists tend not to get all that excited about the universe being 74% dark energy and 26% conventional energy and matter (albeit most of the matter is dark and mysterious as well). Instead they get excited about the fact that the density of dark energy is of the same order of magnitude as that more conventional remainder.

After all, it is quite conceivable that the density of dark energy might be ten, one hundred or even one thousand times more (or less) than the remainder. But nope, it seems it’s about three times as much – which is less than ten and more than one, meaning that the two parts are of the same order of magnitude. And given the various uncertainties and error bars involved, you might even say the density of dark energy and of the more conventional remainder are roughly equivalent. This is what is known as the cosmic coincidence.
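The order-of-magnitude arithmetic can be sketched in a few lines of Python (using the rounded 74/26 split quoted above):

```python
import math

# Rough energy budget of the universe, as quoted in the text
dark_energy = 0.74
conventional = 0.26  # conventional matter and energy, mostly dark matter

ratio = dark_energy / conventional
print(f"density ratio: {ratio:.2f}")  # about 2.85

# Two quantities share an order of magnitude when the log10 of
# their ratio is less than 1 (i.e. the ratio is between 1 and 10)
same_order = abs(math.log10(ratio)) < 1
print(f"same order of magnitude: {same_order}")  # True
```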

To a cosmologist, particularly a philosophically-inclined cosmologist, this coincidence is intriguing and raises all sorts of ideas about why it is so. However, Lineweaver and Egan suggest this is actually the natural experience of any intelligent beings/observers across the universe, since their evolution will always roughly align with the point in time at which the cosmic coincidence is achieved.

A current view of the universe describes its development through the following steps:

Inflationary era – a huge whoomp of volume growth driven by something or other. This is a very quick era, lasting from 10⁻³⁵ to 10⁻³² of the first second after the Big Bang.
Radiation dominated era – the universe continues expanding, but at a less furious rate. Its contents cool as their density declines. Hadrons begin to condense out of the hot quark-gluon soup while dark matter forms out of whatever it forms out of – all steadily adding matter to the universe, although radiation still dominates. This era lasts for maybe 50,000 years.
Matter dominated era – this era begins when the density of matter exceeds the density of radiation and continues through to the release of the cosmic microwave background radiation at 380,000 years, when the first atoms formed – and then continues on for a further 5 billion years. Throughout this era, the energy/matter density of the whole universe continues to gravitationally restrain the rate of expansion of the universe, even though expansion does continue.
Cosmological constant dominated era – from 5 billion years to now (13.7 billion) and presumably for all of hereafter, the energy/matter density of the universe is so diluted that it begins losing its capacity to restrain the expansion of the universe – which hence accelerates. Empty voids of space grow ever larger between local clusters of gravitationally-concentrated matter.

And here we are. Lineweaver and Egan propose that it is unlikely that any intelligent life could have evolved in the universe much earlier than now (give or take a couple of billion years) since you need to progressively cycle through the star formation and destruction of Population III, II and then I stars to fill the universe with sufficient ‘metals’ to allow planets with evolutionary ecosystems to develop.

The four eras of the universe mapped over a logarithmic time scale. Note that "Now" occurs as the decline in matter density and the acceleration in cosmic expansion cross over. Credit: Lineweaver and Egan.

So any intelligent observer in this universe is likely to find the same data which underlie the phenomenon we call the cosmological coincidence. Whether any aliens describe their finding as a ‘coincidence’ may depend upon what mathematical model they have developed to formulate the cosmos. It’s unlikely to be the same one we are currently running with – full of baffling ‘dark’ components, notably a mysterious energy that behaves nothing like energy.

It might be enough for them to note that their observations have been taken at a time when the universe’s contents no longer have sufficient density to restrain the universe’s inherent tendency to expand – and so it expands at a steadily increasing rate.

Further reading: Lineweaver and Egan. The Cosmic Coincidence as a Temporal Selection Effect Produced by the Age Distribution of Terrestrial Planets in the Universe (subsequently published in Astrophysical Journal 2007, Vol 671, 853.)

Q&A with Brian Cox, part 1: Recent Hints of the Higgs

Brian Cox at CERN with Kevin Eldon and Simon Munnery. Photo by Gia Milinovich, courtesy Brian Cox


At two separate conferences in July, particle physicists announced some provocative news about the Higgs boson, and while the Higgs has not yet been found, physicists are continuing to zero in on the elusive particle. Universe Today had the chance to talk with Professor Brian Cox about these latest findings, and he says that within six to twelve months, physicists should be able to make a definite statement about the existence of the Higgs particle. Cox is the Chair in Particle Physics at the University of Manchester, and works on the ATLAS experiment (A Toroidal LHC ApparatuS) at the Large Hadron Collider at CERN. But he’s also active in the popularization of science, specifically with his new television series and companion book, Wonders of the Universe, a follow up to the 2010 Peabody Award-winning series, Wonders of the Solar System.

Universe Today readers will have a chance to win a copy of the book, so stay tuned for more information on that. But today, enjoy the first of a three-part interview with Cox:


Universe Today: Can you tell us about your work with ATLAS and its potential for finding things like extra dimensions, the unification of forces or dark matter?

Brian Cox, during the filming of one of his television series. Image courtesy Brian Cox.

Brian Cox: The big question is the origin of mass in the universe. It is very, very important because it is not an end in itself. It is a fundamental part of Quantum Field Theory, which is our theory of three of the four forces of nature. So if you ask the question on the most basic level of how does the universe work, there are only two pillars of our understanding at the moment. There is Einstein’s Theory of General Relativity, which deals with gravity — the weakest force in the Universe, and with the shape of space and time and all those things. But everything else – electromagnetism, the way atomic nuclei work, the way molecules work, chemistry, all that – everything else is what’s called a Quantum Field Theory. Embedded in that is what’s called the Standard Model of particle physics. And embedded in that is this mechanism for generating mass, and it’s just so fundamental. It’s not just kind of an interesting add-on, it’s right in the heart of the way the theory works.

So, understanding whether our current picture of the Universe is right — and if there is this thing called the Higgs mechanism or whether there is something else going on — is critical to our progress because it is built into that picture. There are hints in the data recently that maybe that mechanism is right. We have to be careful. It’s not a very scientific thing to say that we have hints. We have these thresholds for scientific discovery, and we have them for a reason, because you get these statistical flukes that appear in the data and when you get more data they go away again.

The statement from CERN now is that if they turn out to be more than just fluctuations, really, within six months we should be able to make some definite statement about the existence of the Higgs particle.

I think it is very important to emphasize that this is not just a lot of particle physicists looking for particles because that’s their job. It is the fundamental part of our understanding of three of the four forces of nature.

Brian Cox at Fermilab. Photo by Paul Olding.

UT : So these very interesting results from CERN and the Tevatron at Fermilab giving us hints about the Higgs – could you talk a little bit more about that and your take on the latest findings?

COX: The latest results were published at a set of conferences a few weeks ago and they are just under what is called the three sigma level. That is the way of assessing how significant the results are. The thing about quantum theory, and particle physics in general, is that it is all statistical. If you do this a thousand times, then three times this should happen, and eight times that should happen. So it’s all statistics. As you know, if you toss a coin it can come up heads ten times in a row; there is a probability for that to happen. It doesn’t mean the coin is weighted or there’s something wrong with it. That’s just how statistics works.
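The coin-flip arithmetic, and the Gaussian tail probabilities behind the sigma thresholds particle physicists use, can be sketched in a few lines (the 3- and 5-sigma conventions are standard practice in the field, not something Cox spells out here):

```python
import math

# Probability of ten heads in a row with a fair coin
p_ten_heads = 0.5 ** 10
print(f"ten heads in a row: {p_ten_heads:.5f}")  # about 0.001, i.e. 1 in 1024

# One-sided tail probability of a Gaussian fluctuation at n sigma
def p_value(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"3 sigma: {p_value(3):.5f}")  # about 0.00135 - 'evidence'
print(f"5 sigma: {p_value(5):.2e}")  # about 2.9e-07 - 'discovery'
```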

So there are intriguing hints that they have found something interesting. Both experiments at the Large Hadron Collider, the ATLAS and the Compact Muon Solenoid (CMS) recently reported “excess events” where there were more events than would be expected if the Higgs does not exist. It is about the right mass: we think the Higgs particle should be somewhere between about 120 and 150 gigaelectron volts [GeV—a unit of energy that is also a unit of mass, via E = mc2, where the speed of light, c, is set to a value of one] which is the expected mass range of the Higgs. These hints are around 140, so that’s good, it’s where it should be, and it is behaving in the way that it is predicted to by the theory. The theory also predicts how it should decay away, and what the probability should be, so all the data is that this is consistent with the so-called standard model Higgs.
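For a rough sense of what the quoted mass range means in everyday units, here is a sketch converting GeV to kilograms via E = mc² (the conversion constants are standard values, not from the interview):

```python
# Converting the quoted Higgs mass range from GeV to kilograms
GEV_TO_JOULE = 1.602176634e-10  # energy of 1 GeV in joules
C = 2.99792458e8                # speed of light in m/s

def gev_to_kg(mass_gev):
    # E = mc^2 rearranged: m = E / c^2
    return mass_gev * GEV_TO_JOULE / C**2

for m in (120, 140, 150):
    print(f"{m} GeV is about {gev_to_kg(m):.2e} kg")
# 140 GeV works out to roughly 2.5e-25 kg
```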

But so far, these events are not statistically significant enough to make the call. It is important that the Tevatron has glimpsed it as well, but that has even lower significance, because it runs at lower energy and there are not as many collisions there. So you’ve got to be scientific about things. There is a reason we have these barriers – these thresholds that have to be cleared to claim discoveries. And we haven’t cleared them yet.

But it is fascinating. It’s the first time one of these rumors has been, you know, not just nonsense. It really is a genuine piece of exciting physics. But you have to be scientific about these things. It’s not that we know it is there and we’re just not going to announce it yet. It’s that the statistics aren’t there yet to claim the discovery.

Brian Cox, while filming a BBC series in the Sahara. Image courtesy Brian Cox

UT : Well, my next question was going to be, what happens next? But maybe you can’t really answer that because all you can do is keep doing the research!

COX: The thing about the Higgs, it is so fundamentally embedded in quantum theory. You’ve got to explore it because it is one thing to see a hint of a new particle, but it’s another thing to understand how that particle behaves. There are lots of different ways the Higgs particles can behave and there are lots of different mechanisms.

There is a very popular theory called supersymmetry, which would also explain dark matter, one of the great mysteries in astrophysics. There seems to be a lot of extra stuff in the Universe that is not behaving the way the particles of matter we know of behave – five times more “stuff” than what makes up everything we can see in the Universe. We can’t see dark matter, but we see its gravitational influence. There are theories where we have a very strong candidate for that — a new kind of particle called a supersymmetric particle. In those theories there are five Higgs particles rather than one. So the next question is, if that is a Higgs-like particle that we’ve discovered, then what is it? How does it behave? How does it talk to the other particles?

And then there are a huge number of questions. The Higgs theory as it stands doesn’t explain why the particles have the masses they do. It doesn’t explain why the top quark, which is the heaviest of the fundamental particles, is something like 180 times heavier than the proton. It’s a tiny point-like thing with no size, but it’s 180 times the mass of a proton! That is heavier than some of the heaviest atomic nuclei!
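A quick back-of-the-envelope check of the “180 times” figure, using textbook mass values (these numbers are not quoted in the interview):

```python
# Standard (approximate) masses in GeV
top_quark_gev = 172.8  # top quark
proton_gev = 0.938     # proton

ratio = top_quark_gev / proton_gev
print(f"top quark / proton mass ratio: {ratio:.0f}")  # about 184
```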

Why? We don’t know.

I think it is correct to say there is a door that needs to be opened that has been closed in our understanding of the Universe for decades. It is so fundamental that we’ve got to open it before we can start answering these further questions, which are equally intriguing but we need this answered first.

UT: When we do get some of these questions answered, how is that going to change our outlook and the way that we do things, or perhaps the way YOU do things, anyway! Maybe not us regular folks…

COX: Well, I think it will – because this is part of THE fundamental theory of the forces of nature. So quantum theory in the past has given us an understanding, for example, of the way semiconductors work, and it underpins our understanding of modern technology, and the way chemistry works, the way that biological systems work – it’s all there. This is the theory that describes it all. I think having a radical shift and deepening in understanding of the basic laws of nature will change the way that physics proceeds in the 21st century, without a doubt. It is that fundamental. So, who knows? At every paradigm shift in science, you never really could predict what it was going to do; but the history of science tells you that it did something quite remarkable.

There is a famous quote by Alexander Fleming, who discovered penicillin, who said that when he woke up on a certain September morning of 1928, he certainly didn’t expect to revolutionize modern medicine by discovering the world’s first antibiotic. He said that in hindsight, but he just discovered some mold, basically, but there it was.

But it was fundamental and that is the thing to emphasize.

Some of our theories, you look at them and wonder how we ever worked them out! The answer is: mathematically, the same way that Einstein came up with General Relativity, with mathematical predictions. It is remarkable we’ve been able to predict something so fundamental about the way that empty space behaves. We might turn out to be right.

Tomorrow: Part 2: The space exploration and hopes for the future

Find out more about Brian Cox at his website, Apollo’s Children

Astronomy Without A Telescope – A Photon’s Point Of View

What would you see at the speed of light?


From a photon’s point of view, it is emitted and then instantaneously reabsorbed. This is true for a photon emitted in the core of the Sun, which might be reabsorbed after crossing a fraction of a millimetre’s distance. And it is equally true for a photon that, from our point of view, has travelled for over 13 billion years after being emitted from the surface of one of the universe’s first stars.

So it seems that not only does a photon not experience the passage of time, it does not experience the passage of distance either. But since you can’t move a massless consciousness at the speed of light in a vacuum, the real point of this thought experiment is to indicate that time and distance are just two apparently different aspects of the same thing.

If we attempt to achieve the speed of light, our clocks will slow relative to our point of origin and we will arrive at our destination quicker than we anticipate that we should – as though both the travel time and the distance have contracted.
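The “clocks slow, distances contract” effect is governed by the Lorentz factor; a minimal sketch (the chosen speeds are illustrative only):

```python
import math

# Special-relativistic time dilation / length contraction factor:
# gamma = 1 / sqrt(1 - v^2/c^2), with beta = v/c
def gamma(beta):
    return 1.0 / math.sqrt(1.0 - beta**2)

for beta in (0.5, 0.9, 0.99, 0.999):
    g = gamma(beta)
    print(f"v = {beta:5.3f}c -> clocks slow and distances shrink by {g:.2f}x")
# at 0.99c the factor is about 7.1; at 0.999c it is about 22.4
```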

Similarly, as we approach the surface of a massive object, our clocks will slow relative to a point of higher altitude – and we will arrive at the surface quicker than we might anticipate, as though time and distance contract progressively as we approach the surface.

Again, time and distance are just two aspects of the same thing, space-time, but we struggle to visualise this. We have evolved to see the world in snapshot moments, perhaps because a failure to scan the environment with every step we take might leave us open to attack by a predator.

Science advocates and skeptics say that we should accept the reality of evolution in the same way that we accept the reality of gravity – but actually this is a terrible analogy. Gravity is not real, it’s just our dumbed-down interpretation of space-time curvature.

If you could include the dimension of time in this picture you might get a rough idea of why things appear to accelerate towards a massive object - even though they do not themselves experience any acceleration.

Astronauts moving at a constant velocity through empty space feel weightless. Put a planet in their line of trajectory and they will continue to feel weightless right up until the moment they collide with its surface.

A person on the surface will watch them steadily accelerate from high altitude until that moment of collision. But such doomed astronauts will not themselves experience any such change to their velocity. After all, if they were accelerating, surely they would be pushed back into their seat as a consequence.

Nonetheless, the observer on the planet’s surface is not suffering from an optical illusion when they perceive a falling spacecraft accelerate. It’s just that they fail to acknowledge their particular context of having evolved on the surface of a massive object, where space-time is all scrunched up.

So they see the spacecraft move from an altitude where distance and time (i.e. space-time) is relatively smooth – down to the surface, where space-time (from the point of view of a high altitude observer) is relatively scrunched up. A surface dweller hence perceives that a falling object is experiencing acceleration and wrongly assumes that there must be a force involved.

As for evolution – there are fossils, vestigial organs and mitochondrial DNA. Get real.

Footnote: If you were falling into a black hole you would still not experience acceleration. However, your physical structure would be required to conform to the extremely scrunched up space-time that you move through – and spaghettification would result.

Testing the Multiverse… Observationally!

Seven Year Microwave Sky (Credit: NASA/WMAP Science Team)

The multiverse theory is famous for its striking imagery. Just imagine our own Universe, drifting among a veritable sea of spontaneously inflating “bubble universes”, each a self-contained and causally separate pocket of higher-dimensional spacetime. It’s quite an arresting picture. However, the theory is also famous for being one of the most criticized in all of cosmology. Why? For one, the idea is remarkably difficult, if not downright impossible, to test experimentally. But now, a team of British and Canadian scientists believe they may have found a way.

Attempts to prove the multiverse theory have historically relied upon examination of the CMB radiation, relic light from the Big Bang that satellites like NASA’s Wilkinson Microwave Anisotropy Probe, or WMAP, have probed with incredible accuracy. The CMB has already allowed astronomers to map the network of large-scale structure in today’s Universe from tiny fluctuations detected by WMAP. In a similar manner, some cosmologists have hoped to comb the CMB for disk-shaped patterns that would serve as evidence of collisions with other bubble universes.


Now, physicists at University College London, Imperial College London and the Perimeter Institute for Theoretical Physics have designed a computer algorithm that actually examines the WMAP data for these telltale signatures. After determining what the WMAP results would look like both with and without cosmic collisions, the team uses the algorithm to determine which scenario fits best with the actual WMAP data. Once the results are in, the team’s algorithm performs a statistical analysis to ensure that any signatures that are detected are in fact due to collisions with other universes, and are unlikely to be due to chance. As an added bonus, the algorithm also puts an upper limit on the number of collision signatures astronomers are likely to find.
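As a toy illustration only (not the team’s actual pipeline), the fit-comparison idea can be sketched as: simulate sky data, fit it with and without a collision-like signature, and see how much the signature improves the fit. All names and numbers below are invented for the sketch:

```python
import math
import random

random.seed(42)

# Simulated 1-D 'sky' of pure Gaussian noise; set amp > 0 to inject
# a disk-like collision signature and watch the fit improvement grow
N = 200
center, width, amp = 100, 10, 0.0

def bump(i):
    # Gaussian-shaped template standing in for a collision imprint
    return math.exp(-((i - center) ** 2) / (2 * width ** 2))

data = [amp * bump(i) + random.gauss(0, 1) for i in range(N)]

# Chi-square of the noise-only model (unit noise variance assumed)
chi2_noise = sum(d ** 2 for d in data)

# Least-squares best-fit amplitude for the bump model
a_hat = (sum(d * bump(i) for i, d in enumerate(data))
         / sum(bump(i) ** 2 for i in range(N)))
chi2_bump = sum((d - a_hat * bump(i)) ** 2 for i, d in enumerate(data))

# Large improvements would favour the collision model over chance
print(f"delta chi-square: {chi2_noise - chi2_bump:.2f}")
```

The real analysis must also search over every possible imprint position and radius, and correct its statistics for having performed that search, which is what makes the problem hard.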

While their method may sound fairly straightforward, the researchers are quick to acknowledge the difficulty of the task at hand. As UCL researcher and co-author of the paper Dr. Hiranya Peiris put it, “It’s a very hard statistical and computational problem to search for all possible radii of the collision imprints at any possible place in the sky. But,” she adds, “that’s what pricked my curiosity.”

The results of this ground-breaking project are not yet conclusive enough to determine whether we live in a multiverse or not; however, the scientists remain optimistic about the rigor of their method. The team hopes to continue its research as the CMB is probed more deeply by the Planck satellite, which began its fifth all-sky survey on July 29. The research is published in Physical Review Letters and Physical Review D.

Source: UCL

Astronomy Without A Telescope – Bubblology

Multiverse hypotheses are all very well, but surely 'when worlds collide' we should be able to determine the existence of the multiverse - but to date.... nup. Credit: cosmology.com


One model of a hypothetical multiverse has, perhaps appropriately, some similarity to a glass of beer. Imagine an eternal false vacuum – that’s a bit like a fluid, though not all that much like a fluid – since it doesn’t have volume, in fact it doesn’t have any spatial dimensions. Then imagine that this eternal false vacuum expands.

This sounds rather contradictory since expansion implies there are spatial dimensions, but a string theorist will assure you that it all happens at the sub-Planck scale, where lots of immeasurable and unknowable things can happen – and after a few more drinks you might be willing to go along with this.

So – next, we introduce bubbles to the false vacuum. The bubbles – which are essentially independent baby universes – are true vacuums and can rationally and reasonably expand since they have four overt dimensions of space-time – albeit they may also have the other immeasurable and unknowable dimensions in common with the encompassing false vacuum.

The bubbles are the reason why it is necessary for the false vacuum to expand, indeed it must expand faster than the bubbles – otherwise an expanding bubble universe could ‘percolate’ – that is, spread throughout the all-encompassing false vacuum – so that your multiverse would just become a universe. And where’s the fun in that?

Anyhow, within such an eternal expanding fluid, bubble universes may nucleate at random points – taking us away from the coffee analogy and back to the beer. In bubblology terms, nucleation is the precursor of inflation. The sub-Planck energy of the non-dimensional false vacuum occasionally suffers a kind of hiccup – perhaps a quantum tunnelling event – making the sub-Planck virtual nothingness commence a slow roll down a potential energy hill (whatever the heck that means).

At a certain point in that slow roll, the energy level shifts from a sub-Planck potential-ness into a supra-Planck actual-ness. This shift from sub-Planck to supra-Planck is thought to be a kind of phase transition from something ephemeral to a new ground state of something lasting and substantial – and that phase transition releases heat, kind of like how the phase transition from water to ice releases latent heat.

And so you get the characteristic production of a gargantuan amount of energy out of nothing, which we denizens of our own bubble universe parochially call the Big Bang – being the energy that drove an exponential cosmic inflation of our own bubble, that exponential inflation lasting until the energy density within the bubble was cool enough to form matter – in an e=mc2 kind of way. And so another bubble of persistent somethingness formed within the eternal beer of nothingness.

The light cone of our bubble universe showing the stages of the energy release driving cosmic inflation (reheating), the surface of last scattering (recombination) and the subsequent dissolution of the cosmic fog (reionisation) - cosmic microwave background photons from the surface of last scattering could show signs of a collision with an adjacent bubble universe. Credit: Kleban.

Good story, huh? But, where’s the evidence? Well, there is none, but despite the usual criticisms lobbed at string theorists this is an area where they attempt to offer testable predictions.

Within a multiverse, one or more collisions with another bubble universe are almost inevitable given the beer-mediated timeframe of eternity. Such an event may yet lie in our future, but could equally lie in our past – the fact that we are still here indicating (anthropically) that such a collision may not be fatal.

A collision with another bubble might pass unnoticed if it possessed exactly the same cosmological constant as ours and its contents were roughly equivalent. The bubble wall collision might appear as a blue-shifted circle in the sky – perhaps like the Cold Spot in the cosmic microwave background, although this is most likely the result of a density fluctuation within our own universe.

We could be in trouble if an adjacent universe’s bubble wall pushed inwards on a trajectory towards us – and if it moved at the speed of light we wouldn’t see it until it hit. Even if the wall collision was innocuous, we might be in trouble if the adjacent universe was filled with antimatter. It’s these kind of factors that determine what we might observe – and whether we might survive such an, albeit hypothetical, event.

Further reading: Kleban. Cosmic bubble collisions.

Elliptical Galaxies Don’t Act Their Age…

The galaxy NGC 5557 clearly exhibits extremely extended and faint tidal streams spanning more than 1.2 million light-years from left to right on this image from the MegaCam mounted on the Canada-France-Hawaii Telescope. Image by P.-A. Duc 2011 (c) CEA/CFHT


Thanks to images taken with the MegaCam camera mounted on the Canada-France-Hawaii Telescope (CFHT, CNRC/CNRS/University of Hawaii), researchers are beginning to see that elliptical galaxies just aren’t acting their age. Their initial studies are showing signs of recent merging – meaning that many could be as much as five times younger than previously thought.

We’ve been studying massive elliptical galaxies for a long time and their stripped down stellar population has always led astronomers to assume most were in the 7 to 10 billion year old age bracket. However, astronomers from CNRS, CEA, CFHT, and the Observatoire de Lyon – all members of the Atlas3D international collaboration – have been sneaking a peek at the galactic fountain of youth. According to observations done on two elliptical galaxies (NGC 680 & NGC 5557), it would appear they’ve undergone a spiral galaxy merger… one that happened as recently as 1 to 3 billion years ago.

“Such age estimate is based on the presence of ultra faint filaments in the distant outskirts of the galaxies. These features called tidal streams in the astronomers parlance are typical residuals from a galaxy merger.” says the CFH team. “They are known not to survive in this shape and brightness for more than a few billion years, hence the new age estimate of the resulting elliptical galaxies. These structures were detected for the first time thanks to a very-deep imaging technique boosting the capabilities of CFHT’s wide-field optical imager MegaCam.”

A sample of elliptical galaxies from the Atlas3D survey current collection, all showing clear signs of a recent collision. Image by P.-A. Duc 2011 (c) CEA/CFHT

The Atlas3D team isn’t stopping with these results and they’re looking at a survey of more than one hundred elliptical galaxies close to the Milky Way. When the samples are gathered and compared, they’ll look for more faint extended features that could spell a recent merger. It could mean we need to rethink our standard model for elliptical galaxy formation!

Maybe even ask ’em for ID…

Original News Source: CFH News.

Astronomy Without A Telescope – Gravitational Waves

An artist's impression of gravitational waves. In reality, a single uniform massive object does not generate gravitational waves. However, a massive binary system in orbital motion, could generate dynamic pulses of gravitational energy that might be detected from Earth.


Gravitational waves have some similar properties to light. They move at the same speed in a vacuum – and with a certain frequency and amplitude. Where they differ from light is that they are not scattered or absorbed by matter, in the way that light is.

Thus, it’s likely that primordial gravitational waves, that are speculated to have been produced by the Big Bang, are still out there waiting to be detected and analyzed.

Gravitational waves have been indirectly detected via observations of pulsar PSR 1913+16, a member of a binary system, the orbit of which decays at the rate of approximately three millimetres per orbit. The inspiraling of the binary (i.e. the decay of its orbit) can only be explained by an invisible loss of energy, which we presume to be the result of gravitational waves transporting energy away from the system.
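Taking the article’s ~3 mm per orbit figure together with the pulsar’s published orbital period of roughly 7.75 hours (the period is not stated above), the cumulative decay works out to a few metres per year:

```python
# Back-of-the-envelope cumulative orbital decay of PSR 1913+16
period_hours = 7.75       # published orbital period, assumed here
decay_per_orbit_mm = 3.0  # figure quoted in the text

orbits_per_year = 365.25 * 24 / period_hours
decay_per_year_m = orbits_per_year * decay_per_orbit_mm / 1000

print(f"orbits per year: {orbits_per_year:.0f}")       # about 1131
print(f"orbit shrinks by {decay_per_year_m:.1f} m/yr")  # about 3.4 m
```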

Direct observation of gravitational waves currently escapes us – but seems at least feasible by monitoring the alignment of widely separated test masses. Such monitoring systems are currently in place on Earth, including LIGO, which has test masses separated by up to four kilometres – that separation distance being monitored by lasers designed to detect tiny changes in that distance, which might result from the passage of a gravitational wave initiated from a distant point in the universe.

The passing of a gravitational wave should stretch and contract the Earth. This is not because it strikes the Earth and imparts kinetic energy to it – like an ocean wave hitting land. Instead, the Earth – which sits within space-time – has its geometry altered, so that it continues to fit the momentarily stretched and then contracted space-time within which it sits, as a gravitational wave passes.

The Laser Interferometer Gravitational-Wave Observatory (LIGO) Hanford installation. When you are talking gravitational wave astronomy, big is good. Credit: Caltech.

Gravitational waves are thought to be unaffected by interaction with matter and they move at the speed of light in a vacuum, regardless of whether or not they themselves are in a vacuum. They do lose amplitude (wave height) over distance, but only through attenuation as they spread. This is loosely similar to the way that a water wave, emanating from the point of impact of a pebble dropped into a pond, loses amplitude as the circle it forms grows – though for gravitational waves the amplitude declines in inverse proportion to the distance from the source.

Gravitational waves may also decline in frequency (i.e. increase in wavelength) over very large distances, due to the expansion of the universe – in much the same way that the wavelength of light is red-shifted by the expansion of the universe.
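The stretch factor is the same as for light, (1 + z) for a source at redshift z; a minimal sketch (the z = 1 example is purely illustrative):

```python
# Cosmological redshift stretches wavelength by a factor (1 + z),
# for light and gravitational waves alike
def redshifted(wavelength, z):
    return wavelength * (1 + z)

# A 300 km wave emitted at z = 1 arrives stretched to 600 km;
# its frequency drops by the same factor: f_obs = f_emit / (1 + z)
print(f"{redshifted(300.0, 1):.0f} km")  # 600 km
```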

Given all this, the exceedingly tiny effects that are expected of the gravitational waves that may routinely pass by Earth create a substantial challenge for detection and measurement – since these tiny space-time fluctuations must be distinguished from any background noise.

The noise background for LIGO includes seismic noise (i.e. intrinsic movements of the Earth), instrument noise (e.g. temperature changes that affect the alignment of the detection equipment) and quantum-level photon shot noise – which arises from the quantum indeterminacy of photon arrival times.

Kip Thorne, one of the big names in gravitational wave theory and research, has apparently ironed out that last and perhaps most troublesome effect through the application of quantum non-demolition principles – which enable the measurement of something without destroying it, or without collapsing its wave function.

Nonetheless, the need for invoking quantum non-demolition principles is some indication of the exceedingly faint nature of gravitational waves – which have a generally weak signal strength (i.e. small amplitude) and low frequency (i.e. long, in fact very long, wavelength).

Where visible light may be 390 nanometres and radio light may be 3 metres in wavelength – gravitational waves are more in the order of 300 kilometres for an average supernova blast, up to 300,000 kilometres for an inspiraling black hole binary and maybe up to 3 billion light years for the primordial echoes of the Big Bang.
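As a sanity check on those numbers, here is a minimal sketch converting the quoted gravitational wave wavelengths to frequencies via f = c/λ (the 300 km and 300,000 km figures are the ones given above):

```python
# Convert gravitational wave wavelengths to frequencies via f = c / wavelength.
C = 299_792_458.0  # speed of light, m/s

wavelengths_m = {
    "supernova blast (~300 km)": 300e3,
    "inspiralling black hole binary (~300,000 km)": 300e6,
}

for label, lam in wavelengths_m.items():
    f = C / lam
    print(f"{label}: ~{f:,.1f} Hz")
```

This puts the supernova signal near 1 kHz and the black hole binary near 1 Hz – some indication of why gravitational wave detectors chase such low frequencies compared to any telescope.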

So, there’s a fair way to go with all this at a technological level – although proponents (as proponents are wont to do) say that we are on the verge of our first confirmed observation of a gravitational wave – or else reckon that we have already collected the data, but don’t yet fully know how to interpret it.

This is the current quest of citizen science users of Einstein@Home – the third most popular BOINC distributed computing project after SETI@Home (spot an alien) and Rosetta@Home (fold a protein).

This article follows a public lecture delivered by Kip Thorne at the Australian National University in July 2011 – where he discussed plans for LIGO Australia and also the animated simulations of black hole collisions described in the paper below – which may provide templates to interpret the waveforms that will be detected in the future by gravitational wave observatories.

Further reading: Owen et al (including Thorne, K.) Frame-Dragging Vortexes and Tidal Tendexes Attached to Colliding Black Holes: Visualizing the Curvature of Spacetime.

Astronomy Without A Telescope – Granularity

A gamma ray burst offers a rare opportunity to assess the nature of the apparent 'empty space' vacuum that exists between you and it. In GRB 041219A's case, that's 300 million light years of vacuum. Credit: ESA.

[/caption]

The very small wavelength of gamma ray light offers the potential to gain high resolution data about very fine detail – perhaps even detail about the quantum substructure of a vacuum – or in other words, the granularity of empty space.

Quantum physics suggests that a vacuum is anything but empty, with virtual particles regularly popping in and out of existence within Planck instants of time. The proposed particle nature of gravity also requires graviton particles to mediate gravitational interactions. So, to support a theory of quantum gravity we should expect to find evidence of a degree of granularity in the substructure of space-time.

There is a lot of current interest in finding evidence of Lorentz invariance violations – where Lorentz invariance is a fundamental principle of relativity theory – and (amongst other things) requires that the speed of light in a vacuum should always be constant.

Light is slowed when it passes through materials that have a refractive index – like glass or water. However, we don’t expect such properties to be exhibited by a vacuum – except, according to quantum theory, at exceedingly tiny Planck units of scale.

So theoretically, we might expect a light source that broadcasts across all wavelengths – that is, all energy levels – to have the very high energy, very short wavelength portion of its spectrum affected by the vacuum substructure – while the rest of its spectrum isn’t so affected.
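To get a feel for the scale of the effect being hunted, here is a rough sketch of the commonly quoted first-order Planck-scale dispersion estimate, Δt ≈ ξ × (E/E_Planck) × (D/c) – note this time-delay parameterization is an illustration of how tiny Planck-suppressed effects are, not the polarisation method used in the INTEGRAL analysis discussed below, and the order-unity coefficient ξ is simply assumed to be 1:

```python
# Rough first-order estimate of a Planck-scale dispersion delay:
# delta_t ~ xi * (E / E_planck) * (D / c), with xi assumed to be 1.
E_PLANCK_EV = 1.22e28          # Planck energy, in electronvolts
SECONDS_PER_YEAR = 3.156e7
photon_energy_ev = 100e3       # a 100 keV gamma ray photon
distance_ly = 300e6            # 300 million light years of vacuum
travel_time_s = distance_ly * SECONDS_PER_YEAR  # D/c, in seconds
delta_t = (photon_energy_ev / E_PLANCK_EV) * travel_time_s
print(f"accumulated dispersion delay ~ {delta_t:.1e} seconds")
```

Even after 300 million light years, the accumulated delay is well under a microsecond – which is why Planck-scale tests need the brightest and most distant sources available.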

There are at least philosophical problems with assigning a structural composition to the vacuum of space, since it then becomes a background reference frame – similar to the hypothetical luminiferous ether, which Einstein dismissed the need for in establishing special relativity.

Nonetheless, theorists hope to unify the current schism between large scale general relativity and small scale quantum physics by establishing an evidence-based theory of quantum gravity. It may be that small scale Lorentz invariance violations will be found to exist, but that such violations will become irrelevant at large scales – perhaps as a result of quantum decoherence.

Quantum decoherence might permit the large scale universe to remain consistent with general relativity, but still be explainable by a unifying quantum gravity theory.

The ESA INTEGRAL gamma ray observatory - devoting a proportion of its observing time to searching for the underlying quantum nature of the cosmos. Credit: ESA

On 19 December 2004, the space-based INTEGRAL gamma ray observatory detected Gamma Ray Burst GRB 041219A, one of the brightest such bursts on record. The radiative output of the gamma ray burst showed indications of polarisation – and any quantum-level effects would have been amplified by the fact that the burst occurred in a different galaxy, its light travelling through more than 300 million light years of vacuum to reach us.

Any polarisation attributable to the substructure of the vacuum would only be visible in the gamma ray portion of the light spectrum – and it was found that the difference between the polarisation of the gamma ray wavelengths and that of the rest of the spectrum was… well, undetectable.

The authors of a recent paper on the INTEGRAL data claim it achieved resolution down to Planck scales – that is, 10⁻³⁵ metres. Indeed, INTEGRAL’s observations constrain the possibility of any quantum granularity down to a level of 10⁻⁴⁸ metres or smaller.

Elvis might not have left the building, but the authors claim that this finding should have a major impact on current theoretical options for a quantum gravity theory – sending quite a few theorists back to the drawing board.

Further reading: Laurent et al. Constraints on Lorentz Invariance Violation using INTEGRAL/IBIS observations of GRB041219A.

ESA media release

Where Did Early Cosmic Dust Come From? New Research Says Supernovae

Herschel's view of the region around supernova remnant SN 1987A in the Large Magellanic Cloud. Credit: ESA/NASA-JPL/UCL/STScI

[/caption]

From a JPL Press Release:

New observations from the infrared Herschel Space Observatory reveal that an exploding star expelled the equivalent of between 160,000 and 230,000 Earth masses of fresh dust. This enormous quantity suggests that exploding stars, called supernovae, are the answer to the long-standing puzzle of what supplied our early universe with dust.

“This discovery illustrates the power of tackling a problem in astronomy with different wavelengths of light,” said Paul Goldsmith, the NASA Herschel project scientist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif., who is not a part of the current study. “Herschel’s eye for longer-wavelength infrared light has given us new tools for addressing a profound cosmic mystery.”

Cosmic dust is made of various elements, such as carbon, oxygen, iron and other atoms heavier than hydrogen and helium. It is the stuff of which planets and people are made, and it is essential for star formation. Stars like our sun churn out flecks of dust as they age, spawning new generations of stars and their orbiting planets.

Astronomers have for decades wondered how dust was made in our early universe. Back then, sun-like stars had not been around long enough to produce the enormous amounts of dust observed in distant, early galaxies. Supernovae, on the other hand, are the explosions of massive stars that do not live long.

The new Herschel observations are the best evidence yet that supernovae are, in fact, the dust-making machines of the early cosmos.

This plot shows energy emitted from a supernova remnant called SN 1987A. Previously, NASA's Spitzer Space Telescope detected warm dust around the object. Image credit: ESA/NASA-JPL/UCL/STScI

“The Earth on which we stand is made almost entirely of material created inside a star,” explained the principal investigator of the survey project, Margaret Meixner of the Space Telescope Science Institute, Baltimore, Md. “Now we have a direct measurement of how supernovae enrich space with the elements that condense into the dust that is needed for stars, planets and life.”

The study, appearing in the July 8 issue of the journal Science, focused on the remains of the most recent supernova to be witnessed with the naked eye from Earth. Called SN 1987A, this remnant is the result of a stellar blast that occurred 170,000 light-years away and was seen on Earth in 1987. As the star blew up, it brightened in the night sky and then slowly faded over the following months. Because astronomers are able to witness the phases of this star’s death over time, SN 1987A is one of the most extensively studied objects in the sky.

A new view from the Hubble Space Telescope shows how supernova 1987A has recently brightened.

Initially, astronomers weren’t sure if the Herschel telescope could even see this supernova remnant. Herschel detects the longest infrared wavelengths, which means it can see very cold objects that emit very little heat, such as dust. But it so happened that SN 1987A was imaged during a Herschel survey of the object’s host galaxy — a small neighboring galaxy called the Large Magellanic Cloud (it’s called large because it’s bigger than its sister galaxy, the Small Magellanic Cloud).

After the scientists retrieved the images from space, they were surprised to see that SN 1987A was aglow with light. Careful calculations revealed that the glow was coming from enormous clouds of dust – consisting of 10,000 times more material than previous estimates. The dust is minus 429 to minus 416 degrees Fahrenheit (about minus 256 to minus 249 degrees Celsius) – colder than Pluto, which is about minus 400 degrees Fahrenheit (minus 240 degrees Celsius).
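For those keeping score in metric, the quoted Fahrenheit figures convert as follows (a quick sketch using C = (F − 32) × 5/9):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32.0) * 5.0 / 9.0

for label, f in [("dust, cold end", -429), ("dust, warm end", -416), ("Pluto", -400)]:
    c = f_to_c(f)
    print(f"{label}: {f} F = {c:.0f} C = {c + 273.15:.0f} K")
```

That puts the newly detected dust at a frigid 17 to 24 kelvin – cold enough that only Herschel's long-wavelength infrared eye could pick it out.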

“Our Herschel discovery of dust in SN 1987A can make a significant contribution to the understanding of dust in the Large Magellanic Cloud,” said Mikako Matsuura of University College London, England, the lead author of the Science paper. “In addition to the puzzle of how dust is made in the early universe, these results give us new clues to mysteries about how the Large Magellanic Cloud and even our own Milky Way became so dusty.”

Previous studies had turned up some evidence that supernovae are capable of producing dust. For example, NASA’s Spitzer Space Telescope, which detects shorter infrared wavelengths than Herschel, found 10,000 Earth-masses worth of fresh dust around the supernova remnant called Cassiopeia A. Herschel can see even colder material, and thus the coldest reservoirs of dust. “The discovery of up to 230,000 Earths worth of dust around SN 1987A is the best evidence yet that these monstrous blasts are indeed mighty dust makers,” said Eli Dwek, a co-author at NASA Goddard Space Flight Center in Greenbelt, Md.

Herschel is led by the European Space Agency with important contributions from NASA.

See also the ESA press release on this research.

Astronomy Without A Telescope – Big Rips And Little Rips

The concept of accelerating expansion does get you wondering just how much it can accelerate. Theorists think there still might be a chance of a big crunch, a steady-as-she-goes expansion or a big rip. Or maybe just a little rip?

[/caption]

One of a number of seemingly implausible features of dark energy is that its density is assumed to be constant over time. So, even though the universe expands over time, dark energy does not become diluted, unlike the rest of the contents of the universe.

As the universe expands, it seems that more dark energy appears out of nowhere to sustain the constant dark energy density of the universe. So, as time goes by, dark energy will become an increasingly dominant proportion of the observable universe – remembering that it is already estimated to be 73% of it.

An easy solution to this is to say that dark energy is a feature inherent in the fabric of space-time, so that as the universe expands and the expanse of space-time increases, so dark energy increases and its density remains constant. And this is fine, as long as we then acknowledge that it isn’t really energy – since our otherwise highly reliable laws of thermodynamics, notably the conservation of energy, don’t obviously permit energy to behave in such ways.

An easy solution to explain the uniform acceleration of the universe’s expansion is to propose that dark energy has the feature of negative pressure – where negative pressure is a feature inherent in expansion.

Applying this arcane logic to observation, the observed apparent flatness of the universe’s geometry suggests that the ratio of dark energy pressure to dark energy density is approximately -1 – of magnitude 1, but negative since we are dealing with a negative pressure. This ratio is known as the equation of state parameter for dark energy.

In speculating about what might happen in the universe’s future, an easy solution is to assume that dark energy is just whatever it is – and that this ratio of pressure to density will be sustained at -1 indefinitely, whatever the heck that means.

But cosmologists are rarely happy to just leave things there and have speculated on what might happen if the equation of state does not stay at -1.

Three scenarios for a future driven by dark energy - its density declines over time, it stays the same or its density increases, tearing the contents of the universe to bits. If you are of the view that dark energy is just a mathematical artifact that grows as the expanse of space-time increases - then the cosmological constant option is for you.

If dark energy density decreased over time, the acceleration rate of universal expansion would decline and potentially cease if the pressure/density ratio reached -1/3. On the other hand, if dark energy density increased and the pressure/density ratio dropped below -1 (that is, towards -2, or -3 etc), then you get phantom energy scenarios. Phantom energy is a dark energy which has its density increasing over time. And let’s pause here to remember that the Phantom (ghost who walks) is a fictional character.
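These three scenarios follow directly from how a cosmological fluid with equation of state parameter w dilutes as the universe expands: its density scales as ρ ∝ a^(−3(1+w)), where a is the scale factor. A minimal sketch, with illustrative values of w only:

```python
# Density of a cosmological fluid versus scale factor a: rho ∝ a**(-3 * (1 + w)).
def density_ratio(a, w):
    """Density at scale factor a relative to today (a = 1)."""
    return a ** (-3.0 * (1.0 + w))

for w in (-0.9, -1.0, -1.5):
    # Evaluate when the universe has doubled in linear scale (a = 2).
    print(f"w = {w}: rho(a=2)/rho(today) = {density_ratio(2.0, w):.3f}")
```

A w above −1 dilutes away, w = −1 (the cosmological constant) stays exactly constant, and a w below −1 (phantom energy) grows denser as the universe expands.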

Anyhow, as the universe expands and we allow phantom energy density to increase, it potentially approaches infinity within a finite period of time, causing a Big Rip, as the universe becomes infinite in scale and all bound structures, all the way down to subatomic particles, are torn apart. At a pressure/density ratio of just -1.5, this scenario could unfold over a mere 22 billion years.
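The 22 billion year figure can be recovered from the commonly quoted Big Rip estimate, t_rip − t_now ≈ (2/3) / (|1 + w| × H₀ × √(1 − Ω_m)) – sketched here with assumed round-number values for the Hubble time and matter density:

```python
import math

# Big Rip countdown: t_rip - t_now ~ (2/3) / (|1 + w| * H0 * sqrt(1 - Omega_m)).
H0_INV_GYR = 14.0   # assumed Hubble time 1/H0, in billions of years (H0 ~ 70 km/s/Mpc)
OMEGA_M = 0.3       # assumed matter density fraction
w = -1.5            # phantom equation of state parameter

t_rip_gyr = (2.0 / 3.0) * H0_INV_GYR / (abs(1.0 + w) * math.sqrt(1.0 - OMEGA_M))
print(f"time to Big Rip for w = {w}: ~{t_rip_gyr:.0f} billion years")
```

With these assumed inputs the countdown lands at roughly 22 billion years, matching the figure quoted above; a w closer to −1 pushes the Rip arbitrarily far into the future.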

Frampton et al propose an alternative Little Rip scenario, where the pressure/density ratio is variable over time so that bound structures are still torn apart but the universe does not become infinite in scale.

This might support a cyclic universe model – since it gets you around problems with entropy. A hypothetical Big Bang – Big Crunch cyclic universe has an entropy problem since free energy is lost as everything becomes gravitationally bound – so that you just end up with one huge black hole at the end of the Crunch.

A Little Rip potentially gives you an entropy reboot, since everything is split apart and so can progress from scratch through the long process of being gravitationally bound all over again – generating new stars and galaxies in the process.

Anyhow, Sunday morning – time for a Big Brunch.

Further reading: Frampton et al. The Little Rip.