Antimatter/Dark Matter Hunter Ready to be Installed on Space Station

The Alpha Magnetic Spectrometer arrives at Kennedy Space Center. Credit: Alan Walters (awaltersphoto.com) for Universe Today.


One of the most anticipated science instruments for the International Space Station — which could find the “hidden universe” of antimatter and dark matter — has arrived at Kennedy Space Center. The Alpha Magnetic Spectrometer (AMS-02) is now ready to head to space as part of what is currently the last scheduled space shuttle mission, in February 2011. Dubbed “The Antimatter Hunter,” the AMS is the largest scientific instrument to be installed on the ISS, and comes as a result of the largest international collaboration for a single experiment in space.

“Even before its launch, the AMS-02 has already been hailed as a success. Today we see in it the result of more than a decade of work and cooperation between 56 institutes from 16 different countries,” said Simonetta Di Pippo, ESA Director of Human Spaceflight.

AMS measures the “fingerprints” of astrophysical objects in high-energy particles, and will study the sources of cosmic rays — from ordinary things like stars and supernovae, as well as perhaps more exotic sources like quark stars, dark-matter annihilations, and galaxies made entirely of antimatter.

AMS moved to transport vehicle. Credit: Alan Walters (awaltersphoto.com) for Universe Today.

Each astrophysical source emits a particular type of cosmic rays; the rays migrate through space in all directions, and AMS-02 will detect the ones that pass near Earth. With careful theoretical modeling, the scientists hope to measure those fingerprints.

By observing the hidden parts of the Universe, AMS will help scientists better understand fundamental issues on the origin and structure of the Universe. With a magnetic field 4,000 times stronger than that of the Earth, this state-of-the-art particle physics detector will examine, directly from space, each particle passing through it in a program that is complementary to that of the Large Hadron Collider. So it is not only astronomers who are eagerly awaiting the data, but particle physicists as well.

Samuel Ting. Credit: Alan Walters (awaltersphoto.com) for Universe Today.

The AMS-02 experiment is led by Nobel Prize Laureate Samuel Ting of the Massachusetts Institute of Technology (MIT). The experiment is expected to remain active for the entire lifetime of the ISS and will not return to Earth. The launch of the instrument was delayed so that the original superconducting magnet could be replaced with a permanent one with a longer life expectancy.

Now at KSC, the AMS will be installed in a clean room for more tests. In a few weeks, the detector will be moved to the Space Shuttle, ready for its last mission.

The shuttle crew for STS-134 was on hand to welcome the AMS-02. Credit: Alan Walters (awaltersphoto.com) for Universe Today

The AMS-02 is an experiment that we hope we’ll be doing lots of reporting about in the future!

Source: ESA

Astronomy Without A Telescope – A Universe Free Of Charge?

When you weigh up all the positives and the negatives, does the universe still have a net charge of zero?


If there were equal amounts of matter and anti-matter in the universe, it would be easy to deduce that the universe has a net charge of zero, since a defining ‘opposite’ of matter and anti-matter is charge. So if a particle has charge, its anti-particle will have an equal but opposite charge. For example, protons have a positive charge – while anti-protons have a negative charge.

But it’s not apparent that there is a lot of anti-matter around as neither the cosmic microwave background, nor the more contemporary universe contain evidence of annihilation borders – where contact between regions of large scale matter and large scale anti-matter should produce bright outbursts of gamma rays.

So, since we do apparently live in a matter-dominated universe, whether the universe has a net charge of zero remains an open question.

It’s reasonable to assume that dark matter has either a net zero charge – or just no charge at all – simply because it is dark. Charged particles, and larger objects like stars with dynamic mixtures of positive and negative charges, produce electromagnetic fields and electromagnetic radiation.

So, perhaps we can narrow the question of whether the universe has a net charge of zero to asking whether the sum of all non-dark matter does. We know that most cold, static matter – matter in an atomic, rather than a plasma, form – should have a net charge of zero, since atoms have equal numbers of positively charged protons and negatively charged electrons.

Stars composed of hot plasma might also be assumed to have a net charge of zero, since they are the product of accreted cold, atomic material which has been compressed and heated to create a plasma of dissociated nuclei (+ve) and electrons (-ve).

The principle of charge conservation (attributed to Benjamin Franklin) holds that the amount of charge in a system is always conserved, so that the amount flowing in will equal the amount flowing out.

Apollo 15's Lunar Surface Experiments Package (ALSEP). The Moon represents a good vantage point to measure the balance of incoming cosmic rays versus outgoing solar wind.

An experiment which has been suggested to enable measurement of the net charge of the universe, involves looking at the solar system as a charge-conserving system, where the amount flowing in is carried by charged particles in cosmic rays – while the amount flowing out is carried by charged particles in the Sun’s solar wind.

If we then look at a cool, solid object like the Moon, which has no magnetic field or atmosphere to deflect charged particles, it should be possible to estimate the net contribution of charge delivered by cosmic rays and by solar wind. And when the Moon is shadowed by the tail of the Earth’s magnetosphere, it should be possible to detect the flux attributable to just cosmic rays – which should represent the charge status of the wider universe.

Drawing on data collected from sources including the Apollo surface experiments, the Solar and Heliospheric Observatory (SOHO), the WIND spacecraft and the Alpha Magnetic Spectrometer flown on space shuttle mission STS-91, the surprising finding is a net overbalance of positive charges arriving from deep space, implying that there is an overall charge imbalance in the cosmos.

Either that, or a negative charge flux occurs at energy levels below the threshold of measurement achievable in this study. So perhaps the study is a bit inconclusive, and whether the universe has a net charge of zero remains an open question.

Further reading: Simon, M.J. and Ulbricht, J. (2010) Generating an electrical potential on the Moon by cosmic rays and solar wind?

Astronomy Without A Telescope – The Universe Is Not In A Black Hole

Does a spinning massive object wind up spacetime? Credit: J Bergeron / Sky and Telescope Magazine. An APOD for 7 November 1997.


It has been reported that a recent scientific paper delivers the conclusion that our universe resides inside a black hole in another universe. In fact, this isn’t really what the paper concluded – although what the paper did conclude is still a little out of left field.

The Einstein-Cartan-Kibble-Sciama (ECKS) theory of gravity – claimed as an alternative to general relativity, although still based on the Einstein field equations – seeks to take greater account of the effect of the spin of massive particles. Essentially, while general relativity has it that matter determines how spacetime curves, ECKS also tries to capture the torsion of spacetime, a more dynamic idea of curvature in which you have to think in terms of twisting and contortion, rather than just curvature.

Mind you, general relativity is also able to deal with dynamic curvature. ECKS proponents claim that where ECKS departs from general relativity is in situations with very high matter density – such as inside black holes. General relativity suggests that a singularity (with infinite density and zero volume) forms beyond a black hole’s event horizon. This is not a very satisfying result since the contents of black holes do seem to occupy volume – more massive ones have larger diameters than less massive ones – so general relativity may just not be up to the task of dealing with black hole physics.

ECKS theory attempts to step around the singularity problem by proposing that an extreme torsion of spacetime, resulting from the spin of massive particles compressed within a black hole, prevents a singularity from forming. Instead the intense compression increases the intrinsic angular momentum of the matter within (i.e. the spinning skater draws arms in analogy) until a point is reached where spacetime becomes as twisted, or as wound up, as it can get. From that point the tension must be released through an expansion (i.e. an unwinding) of spacetime in a whole new tangential direction – and voila you get a new baby universe.

But the new baby universe can’t be born and expand in the black hole. Remember this is general relativity. From any frame of reference outside the black hole, the events just described cannot sequentially happen. Clocks seem to slow to a standstill as they approach a black hole’s event horizon. It makes no sense for an external observer to imagine that a sequence of events is taking place over time inside a black hole.

Instead, it is proposed that the birth and expansion of the new baby universe proceeds along a separate branch of spacetime, with the black hole acting as an Einstein-Rosen bridge (i.e. a wormhole).

The horizon problem in Big Bang cosmology. How is it that distant parts of the universe possess such similar physical properties? Well (putting your Occam brand razor aside), perhaps the whole contents of this universe were originally homogenized within a black hole from a parallel universe. Credit: Addison Wesley.

If correct, it’s a turtles-on-turtles solution, and we are left to ponder the mystery of the primeval universe that formed the first black holes, from which all subsequent universes originate.

Something the ECKS hypothesis does manage to do is to provide an explanation for cosmic inflation. Matter and energy crunched within a black hole should achieve a state of isotropy and homogeneity (i.e. no wrinkles) – and when it expands into a new universe through a hypothetical wormhole, this is driven by the unwinding of the spacetime torsion that was built up within the black hole. So you have an explanation for why a universe expands – and why it is so isotropic and homogenous.

Despite there not being the slightest bit of evidence to support it, this does rank as an interesting idea.

Further reading: Poplawski, N.J. (2010) Cosmology with torsion – an alternative to cosmic inflation.

Astronomy Without A Telescope – Our Ageing Universe

Active energy transfer – the thing that distinguishes a young universe from an old universe. Credit: Gemini Observatory.


It all started so full of promise. All at once, our universe burst upon the scene, but much of that initial burst quickly dissipated into background neutrinos and photons – and ever since, pretty much everything our universe has ever done has just dissipated more energy. So, despite the occasional enthusiastic outburst of supernovae and other celestial extravagances, it’s becoming increasingly apparent that our universe is getting on a bit.

The second law of thermodynamics (the one about entropy) demands that everything goes to pot over time – since anything that happens is an opportunity for energy to be dissipated.

The universe is full of energy and should always remain so, but that energy can only make something interesting happen if there is a degree of thermal disequilibrium. For example, if you take an egg out of the refrigerator and drop it in boiling water, it cooks. A useful and worthwhile activity, even if not a very efficient one – since lots of heat from the stove just dissipates into the kitchen, rather than being retained for the cooking of more eggs.

But, on the other hand, if you drop an already cooked, already heated egg into the same boiling water… well, what’s the point? No useful work is done, nothing of note really happens.

This is roughly the idea behind increasing entropy. Everything of note that happens in the universe involves a transfer of energy and at each such transfer some energy is lost from that system. So, following the second law to its logical conclusion, you eventually end up with a universe in thermal equilibrium with itself. At that point, there are no disequilibrium gradients left to drive energy transfer – or to cook eggs. Essentially, nothing else of note will ever happen again – a state known as heat death.
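The egg analogy above is just the second law in miniature. As a rough numerical illustration (the temperatures, the equal heat capacities and the function name are all invented for this sketch, not taken from the article), the total entropy change when two bodies equilibrate is positive whenever they start out at different temperatures, and zero once equilibrium is reached:

```python
import math

# Toy second-law bookkeeping: two bodies of equal heat capacity C reach
# a common temperature; dS = C * ln(T_final / T_initial) for each body.
def equilibrate(T_hot, T_cold, C=1.0):
    """Total entropy change (in units of C) when the two bodies meet."""
    T_final = (T_hot + T_cold) / 2           # equal capacities -> mean T
    dS_hot = C * math.log(T_final / T_hot)   # negative: hot body cools
    dS_cold = C * math.log(T_final / T_cold) # positive: cold body warms
    return dS_hot + dS_cold

# Cold egg (278 K) dropped into boiling water (373 K): entropy is produced.
print(equilibrate(373.0, 278.0))   # small positive number
# Already-boiled egg at water temperature: nothing of note happens.
print(equilibrate(373.0, 373.0))   # 0.0
```

The positive total is the "lost opportunity": once everything shares one temperature, no gradient remains to drive energy transfer, which is heat death writ small.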

It’s true that the early universe was initially in thermal equilibrium, but there was also lots of gravitational potential energy. So, matter (both light and dark) ‘clumped’ – creating lots of thermal disequilibrium – and from there all sorts of interesting things were able to happen. But gravity’s ability to contribute useful work to the universe also has its limits.

In a static universe the end point of all this clumping is a collection of black holes – considered to be objects in a state of high entropy, since whatever they contain no longer engages in energy transfer. It just sits there – and, apart from some whispers of Hawking radiation, will just keep sitting there until eventually (in a googol or so years) the black holes evaporate.

The contents of an expanding universe may never achieve a state of maximum entropy since the expansion itself increases the value of maximum entropy for that universe – but you still end up with not much more than a collection of isolated and ageing white dwarfs – which eventually fizzle out and evaporate themselves.

A head count of the contributors to entropy in our universe. Supermassive black holes top the list. Credit: Egan and Lineweaver. (The full paper notes some caveats and recommendations for further work to improve these estimates).

It’s possible to estimate the current entropy of our universe by tallying up its various components – which have varying levels of entropy density. At the top of the scale are black holes – and at the bottom are luminous stars. These stars appear to be locally enthalpic – where for example, the Sun heats the Earth enabling all sorts of interesting things to happen here. But it’s a time-limited process and what the Sun mostly does is to radiate energy away into empty space.

Egan and Lineweaver have recently re-calculated the current entropy of the observable universe – and obtained a value that is an order of magnitude higher than previous estimates (albeit we are talking 1×10^104 instead of 1×10^103). This is largely the result of incorporating the entropy contributed by recently recognized supermassive black holes – where the entropy of a black hole grows with its horizon area.
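Why supermassive black holes dominate the tally can be made concrete with the standard Bekenstein-Hawking formula S = k A c^3 / (4 G hbar), where A is the horizon area. The formula and constants are standard physics rather than anything from the article, and the 10^9 solar-mass example is illustrative:

```python
import math

# Bekenstein-Hawking entropy: S/k = A*c^3/(4*G*hbar), with horizon area
# A = 4*pi*rs^2 and Schwarzschild radius rs = 2*G*M/c^2, so S ~ M^2.
G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8       # speed of light, m/s
hbar  = 1.055e-34     # reduced Planck constant, J s
M_sun = 1.989e30      # solar mass, kg

def bh_entropy_in_kB(M):
    """Black-hole entropy in units of Boltzmann's constant."""
    rs = 2 * G * M / c**2
    A = 4 * math.pi * rs**2
    return A * c**3 / (4 * G * hbar)

stellar = bh_entropy_in_kB(M_sun)        # ~1e77 k_B for a solar-mass hole
smbh    = bh_entropy_in_kB(1e9 * M_sun)  # ~1e95 k_B for a big SMBH
print(f"ratio: {smbh / stellar:.3g}")    # (1e9)^2 = 1e18, since S ~ M^2
```

A single billion-solar-mass black hole therefore out-entropies a billion stellar-mass ones a billion times over, which is why adding SMBHs to the census moved the total by an order of magnitude.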

So this suggests our universe is a bit further down the track towards heat death than we had previously thought. Enjoy it while you can.

Further reading: Egan, C.A. and Lineweaver, C.H. (2010) A Larger Estimate of the Entropy of the Universe http://arxiv.org/abs/0909.3983

Cosmologists Provide Closest Measure of Elusive Neutrino

Slices through the SDSS 3-dimensional map of the distribution of galaxies. Earth is at the center, and each point represents a galaxy, typically containing about 100 billion stars. Galaxies are colored according to the ages of their stars, with the redder, more strongly clustered points showing galaxies that are made of older stars. The outer circle is at a distance of two billion light years. The region between the wedges was not mapped by the SDSS because dust in our own Galaxy obscures the view of the distant universe in these directions. Both slices contain all galaxies within -1.25 and 1.25 degrees declination. Credit: M. Blanton and the Sloan Digital Sky Survey.


Cosmologists – and not particle physicists — could be the ones who finally measure the mass of the elusive neutrino particle. A group of cosmologists have made their most accurate measurement yet of the mass of these mysterious so-called “ghost particles.” They didn’t use a giant particle detector but used data from the largest survey ever of galaxies, the Sloan Digital Sky Survey. While previous experiments had shown that neutrinos have a mass, that mass is thought to be so small that it is very hard to measure. But looking at the Sloan data on galaxies, PhD student Shaun Thomas and his advisers at University College London put the mass of a neutrino at no greater than 0.28 electron volts, which is less than a billionth of the mass of a single hydrogen atom. This is one of the most accurate measurements of the mass of a neutrino to date.

Their work is based on the principle that the huge abundance of neutrinos (there are trillions passing through you right now) has a large cumulative effect on the matter of the cosmos, which naturally forms into “clumps” of groups and clusters of galaxies. As neutrinos are extremely light they move across the universe at great speeds which has the effect of smoothing this natural “clumpiness” of matter. By analysing the distribution of galaxies across the universe (i.e. the extent of this “smoothing-out” of galaxies) scientists are able to work out the upper limits of neutrino mass.
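The "smoothing" argument above can be roughed out numerically. The rule of thumb below – fractional power suppression dP/P of about -8*f_nu, with Omega_nu*h^2 = (sum of masses)/93.14 eV – is a textbook linear-theory approximation, not the UCL team's actual analysis machinery, and the fiducial h and Omega_m are assumed values; the 0.28 eV limit is read here as a total over neutrino species:

```python
# Sketch: how a summed neutrino mass translates into damping of
# small-scale galaxy clustering, using standard linear-theory rules of
# thumb (assumptions, not the paper's pipeline).

def power_suppression(sum_m_nu_eV, h=0.7, omega_m=0.3):
    """Approximate fractional suppression of small-scale matter power."""
    omega_nu = sum_m_nu_eV / 93.14 / h**2   # neutrino density parameter
    f_nu = omega_nu / omega_m               # neutrino fraction of matter
    return -8.0 * f_nu                      # dP/P ~ -8*f_nu

# The 0.28 eV upper limit corresponds to at most ~16% damping:
print(f"{power_suppression(0.28):.3f}")
```

Because the damping scales linearly with the summed mass, tightening the clustering measurement tightens the mass limit in direct proportion, which is why bigger surveys help.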

A neutrino is capable of passing through a light year –about six trillion miles — of lead without hitting a single atom.
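That "six trillion miles" figure is easy to verify from the speed of light alone (the constants below are standard, nothing article-specific):

```python
# One light year in miles: distance light covers in a Julian year.
c_miles_per_s = 186_282                    # speed of light, miles/s
seconds_per_year = 365.25 * 24 * 3600      # Julian year in seconds
light_year_miles = c_miles_per_s * seconds_per_year
print(f"{light_year_miles:.3g}")           # ~5.88e12: about six trillion miles
```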

Central to this new calculation is the existence of the largest ever 3D map of galaxies, called Mega Z, which covers over 700,000 galaxies recorded by the Sloan Digital Sky Survey and allows measurements over vast stretches of the known universe.

“Of all the hypothetical candidates for the mysterious Dark Matter, so far neutrinos provide the only example of dark matter that actually exists in nature,” said Ofer Lahav, Head of UCL’s Astrophysics Group. “It is remarkable that the distribution of galaxies on huge scales can tell us about the mass of the tiny neutrinos.”

The Cosmologists at UCL were able to estimate distances to galaxies using a new method that measures the colour of each of the galaxies. By combining this enormous galaxy map with information from the temperature fluctuations in the after-glow of the Big Bang, called the Cosmic Microwave Background radiation, they were able to put one of the smallest upper limits on the size of the neutrino particle to date.

“Although neutrinos make up less than 1% of all matter they form an important part of the cosmological model,” said Dr. Shaun Thomas. “It’s fascinating that the most elusive and tiny particles can have such an effect on the Universe.”

“This is one of the most effective techniques available for measuring the neutrino masses,” said Dr. Filipe Abdalla. “It gives great hope of finally obtaining a measurement of the mass of the neutrino in years to come.”

The authors are confident that a larger survey of the Universe, such as the one they are working on called the international Dark Energy Survey, will yield an even more accurate weight for the neutrino, potentially at an upper limit of just 0.1 electron volts.

The results are published in the journal Physical Review Letters.

Source: University College London

Astronomy Without A Telescope – Is Time Real?

Time is an illusion caused by the passage of history (Douglas Adams 1952-2001).

The way that we deal with time is central to a major current schism in physics. Under classical Newtonian physics, and also quantum mechanics, time is absolute: a universal metronome allowing you to determine whether events occur simultaneously or in sequence. Under Einstein’s physics, time is not absolute – simultaneity and sequence depend on who’s looking. For Einstein, the speed of light (in a vacuum) is constant, and time changes in whatever way is required to keep the speed of light constant from all frames of reference.

Under general relativity (GR) you are able to experience living for three score and ten years regardless of where you are or how fast you’re moving, but other folk might measure that duration quite differently. But even under GR, we need to consider whether time only has meaning for sub-light speed consciousnesses such as us. Were a photon to have consciousness, it may not experience time – and, from its perspective, would cross the apparent 100,000 light year diameter of the Milky Way in an instant. Of course, that gets you wondering whether space is real either. Hmm…
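The photon thought-experiment follows directly from special-relativistic time dilation. This sketch (with speeds chosen purely for illustration) shows the traveler's own clock reading shrinking toward zero as v approaches c:

```python
import math

# Proper time for a traveler crossing the Milky Way's ~100,000 ly at
# speed v = beta*c: coordinate time divided by the Lorentz factor gamma.
def proper_time_years(distance_ly, beta):
    """Elapsed time on the traveler's clock, in years."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor
    coordinate_time = distance_ly / beta     # years, in the galaxy frame
    return coordinate_time / gamma           # time dilation

for beta in (0.5, 0.99, 0.999999):
    print(f"v = {beta}c: {proper_time_years(1e5, beta):,.1f} yr")
# As beta -> 1, gamma diverges and the traveler's clock reads ever less;
# in the photon limit it reads zero, hence 'a photon experiences no time'.
```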

Quantum mechanics does (well, sometimes) require absolute time – most obviously in regards to quantum entanglement where determining the spin of one particle, determines the spin of its entangled partner instantaneously and simultaneously. Leaving aside the baffling conundrums imposed by this instantaneous action over a distance – the simultaneous nature of the event implies the existence of absolute time.

In one attempt to reconcile GR and quantum mechanics, time disappears altogether – from the Wheeler-DeWitt equation for quantum gravity – not that many regard this as a 100% successful attempt to reconcile GR and quantum mechanics. Nonetheless, this line of thinking highlights the ‘problem of time’ when trying to develop a Theory of Everything.

The winning entries for a 2008 essay competition on the nature of time run by the Fundamental Questions Institute could be roughly grouped into the themes ‘time is real’, ‘no, it isn’t’ and ‘either way, it’s useful so you can cook dinner.’

The ‘time isn’t real’ camp runs the line that time is just a by-product of what the universe does (anything from the Earth rotating to the transition of a Cesium atom – i.e. the things that we calibrate our clocks to).

How a return to equilibrium after a random downward fluctuation in entropy might appear. First there was light, then a whole bunch of stuff happened and then it started getting cold and dark and empty.

Time is the fire in which we burn (Soran, Star Trek bad guy, circa 24th century).

‘Time isn’t real’ proponents also refer to Boltzmann’s attempt to trivialise the arrow of time by proposing that we just live in a local pocket of the universe where there has been a random downward fluctuation of entropy – so that the perceived forward arrow of time is just a result of the universe returning to equilibrium – being a state of higher entropy where it’s very cold and most of the transient matter that we live our lives upon has evaporated. It is conceivable that another different type of fluctuation somewhere else might just as easily result in the arrow pointing the other way.

Nearly everyone agrees that time probably doesn’t exist outside our Big Bang universe and the people who just want to get on and cook dinner suggest we might concede that space-time could be an emergent property of quantum mechanics. With that settled, we just need to rejig the math – over coffee maybe.

I was prompted to write this after reading a Scientific American June 2010 article, Time Is An Illusion by Craig Callender.

Team Finds Most-Distant Galaxy Cluster Ever Seen

SXDF-XCLJ0218-0510. Max-Planck-Institut für extraterrestrische Physik


Like a location from Star Wars, this galaxy cluster is far, far away and with origins a long, long time ago. With the ungainly name of SXDF-XCLJ0218-0510, this cluster is actually the most distant cluster of galaxies ever seen. It is a whopping 9.6 billion light years away, and X-ray and infrared observations show that the cluster hosts predominantly old, massive galaxies. This means the galaxies formed when the universe was still very young, so finding this cluster and being able to see it is providing new information not only about early galaxy evolution but also about the history of the universe as a whole.

An international team of astronomers from the Max Planck Institute for Extraterrestrial Physics, the University of Tokyo and Kyoto University discovered this cluster using the Subaru telescope along with the XMM-Newton space observatory to look in different wavelengths.

Using the Multi-Object Infrared Camera and Spectrometer (MOIRCS) on the Subaru telescope, the team was able to look in near-infrared wavelengths, where the galaxies are most luminous.

“The MOIRCS instrument has an extremely powerful capability of measuring distances to galaxies. This is what made our challenging observation possible,” said Masayuki Tanaka from the University of Tokyo. “Although we confirmed only several massive galaxies at that distance, there is convincing evidence that the cluster is a real, gravitationally bound cluster.”

Like a contour map, the arrows in the image above indicate galaxies that are likely located at the same distance, clustered around the center of the image. The contours indicate the X-ray emission of the cluster. Galaxies with confirmed distance measurements of 9.6 billion light years are circled. The combination of the X-ray detection and the collection of massive galaxies unequivocally proves that this is a real, gravitationally bound cluster.

That the individual galaxies are indeed held together by gravity is confirmed by observations in a very different wavelength regime: The matter between the galaxies in clusters is heated to extreme temperatures and emits light at much shorter wavelengths than visible to the human eye. The team therefore used the XMM-Newton space observatory to look for this radiation in X-rays.

“Despite the difficulties in collecting X-ray photons with a small effective telescope size similar to the size of a backyard telescope, we detected a clear signature of hot gas in the cluster,” said Alexis Finoguenov from the Max Planck Institute for Extraterrestrial Physics.

The combination of these different observations, in wavelengths invisible to the human eye, led to the pioneering discovery of the galaxy cluster at a distance of 9.6 billion light years – some 400 million light years further into the past than the previously most distant known cluster.

An analysis of the data collected on the individual galaxies shows that the cluster already contains an abundance of evolved, massive galaxies that formed some two billion years earlier. As the dynamical processes of galaxy aging are slow, the presence of these galaxies requires that the cluster assembled through the merger of massive galaxy groups, each nourishing its dominant galaxy. The cluster is therefore an ideal laboratory for studying the evolution of galaxies when the universe was only about a third of its present age.

As distant galaxy clusters are also important tracers of the large scale structure and primordial density fluctuations in the universe, similar observations in the future will lead to important information for cosmologists. The results obtained so far demonstrate that current near infrared facilities are capable of providing a detailed analysis of distant galaxy populations and that the combination with X-ray data is a powerful new tool. The team therefore is continuing the search for more distant clusters.

Source: Max Planck Institute for Extraterrestrial Physics

New Images from Planck Reveal Star Formation Processes

An active star-formation region in the Orion Nebula, as seen By Planck. Credits: ESA/LFI & HFI Consortia

While most newborn stars are hidden beneath a blanket of gas and dust, the Planck space observatory – with its microwave eyes – can peer beneath that shroud to provide new insights into star formation. The latest images released by the Planck team bring to light two different star forming regions in the Milky Way, and in stunning detail, reveal the different physical processes at work.

“Seeing” across nine different wavelengths, Planck took a look at star forming regions in the constellations of Orion and Perseus. The top image shows the interstellar medium in a region of the Orion Nebula where stars are actively forming in large numbers. “The power of Planck’s very wide wavelength coverage is immediately apparent in these images,” said Peter Ade of Cardiff University, co-Investigator on Planck. “The red loop seen here is Barnard’s Loop, and the fact that it is visible at longer wavelengths tells us that it is emitted by hot electrons, and not by interstellar dust. The ability to separate the different emission mechanisms is key for Planck’s primary mission.”

A comparable sequence of images, below, showing a region where fewer stars are forming near the constellation of Perseus, illustrates how the structure and distribution of the interstellar medium can be distilled from the images obtained with Planck.

This sequence of images, showing a region where fewer stars are forming near the constellation of Perseus, illustrates how the structure and distribution of the interstellar medium can be distilled from the images obtained with Planck. Credit: ESA / HFI and LFI Consortia

At wavelengths where Planck’s sensitive instruments observe, the Milky Way emits strongly over large areas of the sky. This emission arises primarily from four processes, each of which can be isolated using Planck. At the longest wavelengths, of about a centimeter, Planck maps the distribution of synchrotron emission due to high-speed electrons interacting with the magnetic fields of our Galaxy. At intermediate wavelengths of a few millimeters the emission is dominated by ionized gas being heated by newly formed stars. At the shortest wavelengths, of around a millimeter and below, Planck maps the distribution of interstellar dust, including the coldest compact regions in the final stages of collapse towards the formation of new stars.

“The real power of Planck is the combination of the High and Low Frequency Instruments which allow us, for the first time, to disentangle the three foregrounds,” said Professor Richard Davis of the University of Manchester’s Jodrell Bank Centre for Astrophysics. “This is of interest in its own right but also enables us to see the Cosmic Microwave Background far more clearly.”

Once formed, the new stars disperse the surrounding gas and dust, changing their own environment. A delicate balance between star formation and the dispersion of gas and dust regulates the number of stars that any given galaxy makes. Many physical processes influence this balance, including gravity, the heating and cooling of gas and dust, magnetic fields and more. As a result of this interplay, the material rearranges itself into ‘phases’ which coexist side-by-side. Some regions, known as ‘molecular clouds,’ contain dense gas and dust, while others, referred to as ‘cirrus’ (which look like the wispy clouds we have here on Earth), contain more diffuse material.

Location of the Planck images in Orion and Perseus. ESA / HFI and LFI Consortia, STSci/DSS/IRAS (background image)

Since Planck can look across such a wide range of frequencies, it can, for the first time, provide data simultaneously on all the main emission mechanisms. Planck’s wide wavelength coverage, which is required to study the Cosmic Microwave Background, proves also to be crucial for the study of the interstellar medium.

“The Planck maps are really fantastic to look at,” said Dr. Clive Dickinson, also of the University of Manchester. “These are exciting times.”

Planck maps the sky with its High Frequency Instrument (HFI), which includes the frequency bands 100-857 GHz (wavelengths of 3mm to 0.35mm), and the Low Frequency Instrument (LFI) which includes the frequency bands 30-70 GHz (wavelengths of 10mm to 4mm).
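The band edges quoted above are consistent with the simple conversion wavelength = c / frequency; the frequencies come from the article, and the function is just that standard conversion, nothing instrument-specific:

```python
# Convert Planck band-edge frequencies (GHz) to wavelengths (mm).
c = 2.998e8  # speed of light, m/s

def wavelength_mm(freq_GHz):
    """Free-space wavelength in millimetres for a frequency in GHz."""
    return c / (freq_GHz * 1e9) * 1e3   # metres -> millimetres

for f in (30, 70, 100, 857):            # LFI and HFI band edges, GHz
    print(f"{f} GHz -> {wavelength_mm(f):.2f} mm")
# 30 GHz ~ 10 mm, 70 GHz ~ 4 mm, 100 GHz ~ 3 mm, 857 GHz ~ 0.35 mm
```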

The Planck team will complete its first all-sky survey in mid-2010, and the spacecraft will continue to gather data until the end of 2012, during which time it will complete four sky scans. Arriving at the main cosmology results will require about two years of data processing and analysis. The first set of processed data will be made available to the worldwide scientific community towards the end of 2012.

Source: ESA and Cardiff University

GOODS, Under Astronomers’ AEGIS, Produce GEMS

No, not really (but I got all three key words into the title in a way that sorta makes sense).

Astronomers, like most scientists, just love acronyms; unfortunately, like most acronyms, the ones astronomers use make no sense to non-astronomers on their own.

And sometimes not even when written in full:
GOODS = Great Observatories Origins Deep Survey; OK that’s vaguely comprehensible (but what ‘origins’ is it about?)
AEGIS = All-wavelength Extended Groth strip International Survey; hmm, what’s a ‘Groth’?
GEMS = Galaxy Evolution from Morphology and SEDs; is Morphology the study of Morpheus’ behavior? And did you guess that the ‘S’ stood for ‘SEDs’ (not ‘Survey’)?

But, given that these all involve a ginormous amount of the ‘telescope time’ of the world’s truly great observatories, to produce such visually stunning images as the one below (NOT!), why do astronomers do it?

GEMS tile#58 (MPIfA)


Astronomy has made tremendous progress in the last century, when it comes to understanding the nature of the universe in which we live.

As late as the 1920s there was still debate about the (mostly faint) fuzzy patches that seemed to be everywhere in the sky: were the spiral-shaped ones separate ‘island universes’, or just odd blobs of gas and dust like the Orion nebula? (The word ‘galaxy’ hadn’t even been coined then.)

Today we have a powerful, coherent account of everything we see in the night sky, no matter whether we use x-ray eyes, night vision (infrared), or radio telescopes, an account that incorporates the two fundamental theories of modern physics, general relativity and quantum theory. We say that all the stars, emission and absorption nebulae, planets, galaxies, supermassive black holes (SMBHs), gas and plasma clouds, etc formed, directly or indirectly, from a nearly uniform, tenuous sea of hydrogen and helium gas about 13.4 billion years ago (well, maybe the SMBHs didn’t). This is the ‘concordance LCDM cosmological model’, known popularly as ‘the Big Bang Theory’.

But how? How did the first stars form? How did they come together to form galaxies? Why did some galaxies’ nuclei ‘light up’ to form quasars (and others didn’t)? How did the galaxies come to have the shapes we see? … and a thousand other questions, questions which astronomers hope to answer, with projects like GOODS, AEGIS, and GEMS.

The basic idea is simple: pick a random, representative patch of sky and stare at it, for a very, very long time. And do so with every kind of eye you have (but most especially the very sharp ones).

By staring across as much of the electromagnetic spectrum as possible, you can make a chart (or graph) of the amount of energy coming to us from each part of that spectrum, for each of the separate objects you see; this is called the spectral energy distribution, or SED for short.
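In code, an SED is just flux measurements assembled across the spectrum. A minimal sketch, with flux densities invented purely for illustration (the quantity νF<sub>ν</sub> plotted against frequency is a standard way of presenting an SED):

```python
# Toy spectral energy distribution (SED) for one object.
# The flux-density values below are hypothetical, for illustration only.
measurements_jy = {           # band-centre wavelength (m) -> flux density (Jy)
    1.0e-10: 2.0e-7,   # X-ray
    5.0e-7:  3.0e-4,   # optical (V band)
    2.4e-5:  8.0e-3,   # mid-infrared
    2.1e-1:  1.5e-2,   # radio (21 cm)
}

C = 2.998e8  # speed of light, m/s

def sed(points):
    """Return (frequency in Hz, nu*F_nu in W/m^2) pairs, sorted by frequency."""
    out = []
    for lam, f_jy in points.items():
        nu = C / lam
        f_si = f_jy * 1e-26          # Jy -> W m^-2 Hz^-1
        out.append((nu, nu * f_si))  # nu * F_nu, energy flux per log interval
    return sorted(out)

for nu, nufnu in sed(measurements_jy):
    print(f"{nu:10.3e} Hz : {nufnu:10.3e} W/m^2")
```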

By breaking the light of each object into its rainbow of colors – taking a spectrum, using a spectrograph – you can find the tell-tale lines of various elements (and from this work out a great deal about the physical conditions of the material which emitted, or absorbed, the light); “light” here is shorthand for electromagnetic radiation, though mostly ultraviolet, visible light (which astronomers call ‘optical’), and infrared (near, mid, and far).

By taking really, really sharp images of the objects you can classify, categorize, and count them by their shape, morphology in astronomer-speak.

And because the Hubble relationship gives you an object’s distance once you know its redshift, and because greater distance means an earlier time, sorting everything by redshift gives you a picture of how things have changed over time, ‘evolution’ as astronomers say (not to be confused with the evolution Darwin made famous, which is a very different thing).
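The redshift-to-distance step can be sketched in a few lines. This uses the low-redshift Hubble law d ≈ cz/H₀, which is only a rough approximation (the deep surveys discussed here reach redshifts where a full cosmological model is needed); the H₀ value is an assumed round number:

```python
# Sketch of the redshift -> distance idea, valid only for z << 1.
C_KM_S = 2.998e5   # speed of light, km/s
H0 = 70.0          # Hubble constant, km/s/Mpc (assumed value)

def redshift(lam_observed, lam_rest):
    """z from the measured shift of a known spectral line."""
    return lam_observed / lam_rest - 1.0

def distance_mpc(z):
    """Low-z Hubble-law distance in megaparsecs: d ~ c*z / H0."""
    return C_KM_S * z / H0

# Example: the H-alpha line (rest wavelength 656.3 nm) observed at 722 nm.
z = redshift(722.0, 656.3)
print(f"z = {z:.3f}, d ~ {distance_mpc(z):.0f} Mpc")
```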

GOODS

The great observatories are Chandra, XMM-Newton, Hubble, Spitzer, and Herschel (space-based), ESO-VLT (European Southern Observatory Very Large Telescope), Keck, Gemini, Subaru, APEX (Atacama Pathfinder Experiment), JCMT (James Clerk Maxwell Telescope), and the VLA. Some of the observing commitments are impressive, for example over 2 million seconds using the ISAAC instrument (doubly impressive considering that ground-based facilities, unlike space-based ones, can only observe the sky at night, and only when there is no Moon).

There are two GOODS fields, called GOODS-North and GOODS-South. Each is a mere 150 square arcminutes in size, which is tiny, tiny, tiny (you’d need about five fields this size to completely cover the Moon)! Of course, some of the observations extend beyond the two core 150-square-arcminute fields, but every observatory covered every square arcsecond of at least one field (and the space-based observatories covered both).
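The "five fields to cover the Moon" figure is easy to verify, taking the Moon's mean angular diameter as roughly 31 arcminutes:

```python
import math

# Check the claim that ~5 GOODS-sized fields (150 sq arcmin each)
# would cover the Moon's disc.
MOON_DIAMETER_ARCMIN = 31.0   # mean angular diameter of the Moon
FIELD_SQ_ARCMIN = 150.0       # one GOODS field

moon_area = math.pi * (MOON_DIAMETER_ARCMIN / 2.0) ** 2  # ~755 sq arcmin
fields_needed = moon_area / FIELD_SQ_ARCMIN              # ~5.0

print(f"Moon disc: {moon_area:.0f} sq arcmin -> {fields_needed:.1f} fields")
```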

GOODS-N ACS fields (GOODS/STScI)

GOODS-N is centered on the Hubble Deep Field (North is understood; this is the first HDF), at 12h 36m 49.4000s +62d 12′ 58.000″ J2000.
GOODS-S ACS fields (GOODS/STScI)

GOODS-S is centered on the Chandra Deep Field-South (CDFS), at 3h 32m 28.0s -27d 48′ 30″ J2000.

The Hubble observations were taken using the ACS (Advanced Camera for Surveys), in four wavebands (bandpasses, filters), which are approximately the astronomers’ B, V, i, and z.

Extended Groth Strip fields (AEGIS)

AEGIS

The ‘Groth’ refers to Edward J. Groth, who is currently at the Physics Department of Princeton University. In 1995 he presented a ‘poster paper’ at the 185th meeting of the American Astronomical Society entitled “A Survey with the HST”. The Groth strip consists of 28 pointings of the Hubble’s WFPC2 camera, taken in 1994 and centered on 14h 17m +52d 30′. The Extended Groth Strip (EGS) is considerably bigger than the GOODS fields combined. The observatories which have covered the EGS include Chandra, GALEX, the Hubble (both NICMOS and ACS, in addition to WFPC2), CFHT, MMT, Subaru, Palomar, Spitzer, JCMT, and the VLA. The total area covered is 0.5 to 1 square degree, though the Hubble observations cover only ~0.2 square degrees (and only ~0.0128 square degrees for the NICMOS ones). Only two filters were used for the ACS observations (approximately V and I).

I guess you, dear reader, can work out why this is called an ‘All wavelength’ and ‘International Survey’, can’t you?

GEMS' ACS fields (MPIfA)

GEMS

GEMS is centered on the CDFS (the Chandra Deep Field-South, remember?), but covers a much bigger area than GOODS-S: 900 square arcminutes, the largest contiguous field imaged by the Hubble at the time (circa 2004). The COSMOS field is certainly larger, but most of it is monochromatic (I band only), so the GEMS field remains the largest contiguous color one to date. It is a mosaic of 81 ACS pointings, using two filters (approximately V and z).

Its SEDs component comes largely from the results of a previous large project covering the same area, called COMBO-17 (Classifying Objects by Medium-Band Observations – a spectrophotometric 17-band survey).

Sources: GOODS (STScI), GOODS (ESO), AEGIS, GEMS, ADS
Special thanks to reader nedwright for catching the error re GEMS (and thanks, too, to the readers who have emailed me with comments and suggestions; much appreciated)

Magnetic Fields in Inter-cluster Space: Measured at Last

[/caption]
The strengths of the magnetic fields here on Earth, on the Sun, in interplanetary space, on stars in our galaxy (the Milky Way; some of them anyway), in the interstellar medium (ISM) of our galaxy, and in the ISM of other spiral galaxies (some of them anyway) have all been measured. But there have been no measurements of the strength of magnetic fields in the space between galaxies and between clusters of galaxies (the IGM and ICM).

Up till now.

But who cares? What scientific importance does the strength of the IGM and ICM magnetic fields have?

The Large Area Telescope (LAT) on Fermi detects gamma rays through the matter (electrons) and antimatter (positrons) they produce after striking layers of tungsten. Credit: NASA/Goddard Space Flight Center Conceptual Image Lab

Estimates of these fields may provide “a clue that there was some fundamental process in the intergalactic medium that made magnetic fields,” says Ellen Zweibel, a theoretical astrophysicist at the University of Wisconsin, Madison. One “top-down” idea is that all of space was somehow left with a slight magnetic field soon after the Big Bang – around the end of inflation, Big Bang Nucleosynthesis, or decoupling of baryonic matter and radiation – and this field grew in strength as stars and galaxies amassed and amplified its intensity. Another, “bottom-up” possibility is that magnetic fields formed initially by the motion of plasma in small objects in the primordial universe, such as stars, and then propagated outward into space.

So how do you estimate the strength of a magnetic field, tens or hundreds of millions of light-years away, in regions of space a looong way from any galaxies (much less clusters of galaxies)? And how do you do this when you expect these fields to be much less than a nanoGauss (nG), perhaps as small as a femtoGauss (fG, which is a millionth of a nanoGauss)? What trick can you use?

A very neat one, relying on physics not directly tested in any laboratory here on Earth, and unlikely to be so tested during the lifetime of anyone reading this today: the production of electron-positron pairs when a high-energy gamma-ray photon collides with an infrared or microwave one. (This can’t be tested in any laboratory today because we can’t make gamma rays of sufficiently high energy; and even if we could, they’d collide so rarely with infrared light or microwaves that we’d have to wait centuries to see a single pair produced.) But blazars produce copious quantities of TeV gamma rays, and in intergalactic space microwave photons are plentiful (that’s what the cosmic microwave background – CMB – is!), and so too are far-infrared ones.
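The energies involved follow from the standard head-on pair-production threshold, E₁·E₂ ≥ (mₑc²)². A rough estimate (the representative target-photon energies below are assumed typical values, not figures from the papers):

```python
# Head-on pair-production threshold: a gamma ray of energy E1 can make an
# e+e- pair off a soft photon of energy E2 only if E1 * E2 >= (m_e c^2)^2.
ME_C2_EV = 5.11e5  # electron rest energy, eV

def threshold_gamma_ev(target_photon_ev):
    """Minimum gamma-ray energy (eV) to pair-produce on a given soft photon."""
    return ME_C2_EV ** 2 / target_photon_ev

cmb_peak_ev = 6.6e-4   # a typical CMB photon, eV (assumed representative value)
far_ir_ev = 0.01       # a typical far-infrared photon, eV (assumed)

print(f"on CMB photons:    {threshold_gamma_ev(cmb_peak_ev) / 1e12:.0f} TeV")
print(f"on far-IR photons: {threshold_gamma_ev(far_ir_ev) / 1e12:.1f} TeV")
```

The estimate shows why both photon seas matter: CMB photons only stop gamma rays of hundreds of TeV, while the far-infrared background brings the threshold down toward the TeV energies blazars actually emit in abundance.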

MAGIC telescope (Credit: Robert Wagner)

Having been produced, the positron and electron will interact with the CMB, local magnetic fields, other electrons and positrons, etc. (the details are rather messy, but were basically worked out some time ago), with the net result that observations of distant, bright sources of TeV gamma rays can set lower limits on the strength of the IGM and ICM magnetic fields through which they travel. Several recent papers report results of such observations, using the Fermi Gamma-Ray Space Telescope and the MAGIC telescope.

So how strong are these magnetic fields? The various papers give different numbers, from greater than a few tenths of a femtoGauss to greater than a few femtoGauss.

“The fact that they’ve put a lower bound on magnetic fields far out in intergalactic space, not associated with any galaxy or clusters, suggests that there really was some process that acted on very wide scales throughout the universe,” Zweibel says. And that process would have occurred in the early universe, not long after the Big Bang. “These magnetic fields could not have formed recently and would have to have formed in the primordial universe,” says Ruth Durrer, a theoretical physicist at the University of Geneva.

So, perhaps we have yet one more window into the physics of the early universe; hooray!

Sources: Science News, arXiv:1004.1093, arXiv:1003.3884