Guest Post: The Cosmic Energy Inventory

The Cosmic Energy Inventory chart by Markus Pössel.


Now that the old year has drawn to a close, it’s traditional to take stock. And why not think big and take stock of everything there is?

Let’s base our inventory on energy. And as Einstein taught us that energy and mass are equivalent, that means automatically taking stock of all the mass that’s in the universe, as well – including all the different forms of matter we might be interested in.

Of course, since the universe might well be infinite in size, we can’t simply add up all the energy. What we’ll do instead is look at fractions: How much of the energy in the universe is in the form of planets? How much is in the form of stars? How much is plasma, or dark matter, or dark energy?


The chart above is a fairly detailed inventory of our universe. The numbers I’ve used are from the article The Cosmic Energy Inventory by Masataka Fukugita and Jim Peebles, published in 2004 in the Astrophysical Journal (vol. 616, p. 643ff.). The chart style is borrowed from Randall Munroe’s Radiation Dose Chart over at xkcd.

These fractions will have changed a lot over time, of course. Around 13.7 billion years ago, in the Big Bang phase, there would have been no stars at all. And the number of, say, neutron stars or stellar black holes will have grown continuously as more and more massive stars have ended their lives, producing these kinds of stellar remnants. For this chart, following Fukugita and Peebles, we’ll look at the present era. What is the current distribution of energy in the universe? Unsurprisingly, the values given in that article come with different uncertainties – after all, the authors are extrapolating to a pretty grand scale! The details can be found in Fukugita & Peebles’ article; for us, their most important conclusion is that the observational data and their theoretical bases are now indeed firm enough for an approximate, but differentiated and consistent picture of the cosmic inventory to emerge.

Let’s start with what’s closest to our own home. How much of the energy (equivalently, mass) is in the form of planets? As it turns out: not a lot. Based on extrapolations from what data we have about exoplanets (that is, planets orbiting stars other than the sun), just one part-per-million (1 ppm) of all energy is in the form of planets; in scientific notation: 10⁻⁶. Let’s take “1 ppm” as the basic unit for our first chart, and represent it by a small light-green square. (Fractions of 1 ppm will be represented by partially filled such squares.) Here is the first box (of three), listing planets and other contributions of about the same order of magnitude:

So what else is in that box? Other forms of condensed matter, mainly cosmic dust, account for 2.5 ppm, according to rough extrapolations based on observations within our home galaxy, the Milky Way. Among other things, this is the raw material for future planets!

For the next contribution, a jump in scale. To the best of our knowledge, pretty much every galaxy contains a supermassive black hole (SMBH) in its central region. Masses for these SMBHs vary between a hundred thousand times the mass of our Sun and several billion solar masses. Matter falling into such a black hole (and getting caught up, intermittently, in super-hot accretion disks swirling around the SMBHs) is responsible for some of the brightest phenomena in the universe: active galaxies, including ultra high-powered quasars. The contribution of matter caught up in SMBHs to our energy inventory is rather modest, though: about 4 ppm; possibly a bit more.

Who else is playing in the same league? For one, the sum total of all electromagnetic radiation produced by stars and by active galaxies (to name the two most important sources) over the last billions of years: 2 ppm. Also, neutrinos produced during supernova explosions (at the end of the lives of massive stars), in the formation of white dwarfs (remnants of lower-mass stars like our Sun), or simply as part of the ordinary fusion processes that power ordinary stars: 3.2 ppm all in all.

Then, there’s binding energy: If two components are bound together, you will need to invest energy in order to separate them. That’s why binding energy is negative – it’s an energy deficit you will need to overcome to pry the system’s components apart. Nuclear binding energy, from stars fusing together light elements to form heavier ones, accounts for -6.3 ppm in the present universe – and the total gravitational binding energy accumulated as stars, galaxies, galaxy clusters, other gravitationally bound objects and the large-scale structure of the universe have formed over the past 14 or so billion years, for an even larger -13.4 ppm. All in all, the negative contributions from binding energy more than cancel out all the positive contributions by planets, radiation, neutrinos etc. we’ve listed so far.
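
All of this is easy to tally up yourself. Here is a minimal sketch in Python, using the ppm values quoted above (with the SMBH entry taken at its “about 4 ppm” figure):

```python
# Quick bookkeeping check for box 1, using the ppm values quoted in the text.
positive_ppm = {
    "planets": 1.0,
    "cosmic dust and condensed matter": 2.5,
    "supermassive black holes": 4.0,
    "starlight and other radiation": 2.0,
    "neutrinos": 3.2,
}
binding_ppm = {
    "nuclear binding energy": -6.3,
    "gravitational binding energy": -13.4,
}

pos = sum(positive_ppm.values())
neg = sum(binding_ppm.values())
print(f"positive contributions: +{pos:.1f} ppm")       # +12.7 ppm
print(f"binding energies:       {neg:.1f} ppm")        # -19.7 ppm
print(f"net:                    {pos + neg:.1f} ppm")  # -7.0 ppm
```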

Which brings us to the next level. In order to visualize larger contributions, we need to change scale. In box 2, one square will represent a fraction of 1/20,000, or 0.00005. Put differently: fifty of the little squares in the first box correspond to a single square in the second box:

So here, without further ado, is box 2 (including, in the upper right corner, a scale model of the first box):

Now we are in the realm of stars and related objects. By measuring the luminosity of galaxies, and using standard relations between the masses and luminosities of stars (the “mass-to-light ratio”), you can get a first estimate for the total mass (equivalently: energy) contained in stars. You’ll also need to use the empirical relation (the “initial mass function”) for how this mass is distributed, though: How many massive stars should there be? How many lower-mass stars? Since different stars have different lifetimes (live massively, die young), this gives estimates for how many stars out there are still in the prime of life (“main sequence stars”) and how many have already died, leaving white dwarfs (from low-mass stars), neutron stars (from more massive stars) or stellar black holes (from even more massive stars) behind. The mass distribution also provides you with an estimate of how much mass there is in substellar objects such as brown dwarfs – objects which never had sufficient mass to make it to stardom in the first place.

Let’s start small with the neutron stars at 0.00005 (1 square, at our current scale) and the stellar black holes (0.00007). Interestingly, those are outweighed by brown dwarfs, which individually have much less mass, but of which there are, apparently, really a lot (0.00014; this is typical of stellar mass distributions – lots of low-mass objects, far fewer massive ones). Next come white dwarfs as the remnants of lower-mass stars like our Sun (0.00036). And then, outweighing all the remnants and substellar objects combined, ordinary main sequence stars like our Sun and its higher-mass and (mostly) lower-mass brethren (0.00205).

Interestingly enough, in this box, stars and related objects contribute about as much mass (or energy) as more undifferentiated types of matter: molecular gas (mostly hydrogen molecules, at 0.00016), hydrogen and helium atoms (HI and HeI, 0.00062) and, most notably, the plasma that fills the void between galaxies in large clusters (0.0018) add up to a whopping 0.00258. Stars, brown dwarfs and remnants add up to 0.00267.
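
Again, the sums are quick to verify – here, in the same style, the box-2 bookkeeping (values as quoted above):

```python
# Adding up the box-2 entries quoted above (fractions of the total energy).
stars_and_remnants = {
    "neutron stars": 0.00005,
    "stellar black holes": 0.00007,
    "brown dwarfs": 0.00014,
    "white dwarfs": 0.00036,
    "main sequence stars": 0.00205,
}
diffuse_matter = {
    "molecular gas": 0.00016,
    "atomic hydrogen and helium (HI, HeI)": 0.00062,
    "cluster plasma": 0.0018,
}
print(f"stars & remnants: {sum(stars_and_remnants.values()):.5f}")  # 0.00267
print(f"diffuse matter:   {sum(diffuse_matter.values()):.5f}")      # 0.00258
```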

Further contributions with about the same order of magnitude are survivors from our universe’s most distant past: The cosmic microwave background (CMB), remnant of the extremely hot radiation interacting with equally hot plasma in the big bang phase, contributes 0.00005; the lesser-known cosmic neutrino background, another remnant of that early equilibrium, contributes a remarkable 0.0013. The binding energy from the first primordial fusion events (formation of light elements within those famous “first three minutes”) gives another contribution in this range: -0.00008.

While, in the previous box, the matter we love, know and need was not dominant, it at least made a dent. This changes when we move on to box 3. In this box, one square corresponds to 0.005. In other words: 100 squares from box 2 add up to a single square in box 3:

Box 3 is the last box of our chart. Again, a scale model of box 2 is added for comparison: All that’s in box 2 corresponds to one-square-and-a-bit in box 3.
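
And the three scales really do line up as described – a two-line consistency check:

```python
# The three box scales, as described in the text.
box1_square = 1e-6        # 1 ppm
box2_square = 1 / 20_000  # 0.00005
box3_square = 0.005

print(box2_square / box1_square)  # 50.0  -> fifty box-1 squares per box-2 square
print(box3_square / box2_square)  # 100.0 -> a hundred box-2 squares per box-3 square
```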

The first new contribution: warm intergalactic plasma. Its presence is deduced from the overall amount of ordinary matter (which follows from measurements of the cosmic background radiation, combined with data from surveys and measurements of the abundances of light elements) as compared with the ordinary matter that has actually been detected (as plasma, stars, and so on). From models of large-scale structure formation, it follows that this missing matter should come in the shape (non-shape?) of a diffuse plasma, which isn’t dense (or hot) enough to allow for direct detection. This cosmic filler substance amounts to 0.04, or 85% of ordinary matter, showing just how much of a fringe phenomenon those astronomical objects we usually hear and read about really are.

The final two (dominant) contributions come as no surprise for anyone keeping up with basic cosmology: dark matter at 23% is, according to simulations, the backbone of cosmic large-scale structure, with ordinary matter no more than icing on the cake. Last but not least, there’s dark energy with its contribution of 72%, responsible both for the cosmos’ accelerated expansion and for the 2011 physics Nobel Prize.

Minority inhabitants of a part-per-million type of object made of non-standard cosmic matter – that’s us. But at the same time, we are a species that, its cosmic fringe position notwithstanding, has made remarkable strides in unravelling the big picture – including the cosmic inventory represented in this chart.

__________________________________________

Here is the full chart for you to download: the PNG version (1200×900 px, 233 kB) or the lovingly hand-crafted SVG version (29 kB).

The chart “The Cosmic Energy Inventory” is licensed under Creative Commons BY-NC-SA 3.0. In short: You’re free to use it non-commercially; you must add the proper credit line “Markus Pössel [www.haus-der-astronomie.de]”; if you adapt the work, the result must be available under this or a similar license.

Technical notes: As is common in astrophysics, Fukugita and Peebles give densities as fractions of the so-called critical density; in the usual cosmological models, that density, evaluated at any given time (in this case: the present), is critical for determining the geometry of the universe. Using very precise measurements of the cosmic background radiation, we know that the average density of the universe is indistinguishable from the critical density. For simplicity’s sake, I’m skipping this detour in the main text and quoting all of F & P’s numbers as “fractions of the universe’s total energy (density)”.
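
For readers who want to put a number on that critical density, here is a minimal sketch. The formula is the standard rho_c = 3 H0² / (8 π G); the value H0 = 70 km/s/Mpc is an illustrative round number, not necessarily the one Fukugita & Peebles adopt:

```python
import math

# Critical density of the universe: rho_c = 3 * H0**2 / (8 * pi * G).
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22  # one megaparsec in metres
H0 = 70e3 / Mpc  # Hubble constant (70 km/s/Mpc) converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density: {rho_c:.2e} kg/m^3")  # ~9.2e-27, a few H atoms per m^3
```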

For the supermassive black hole contributions, I’ve neglected the fraction ?n in F & P’s article; that’s why I’m quoting a lower limit only. The real number could theoretically be twice the quoted value; it’s apparently more likely to be close to the value given here, though. For my gravitational binding energy, I’ve added F & P’s primeval gravitational binding energy (no. 4 in their list) and their binding energy from dissipative gravitational settling (no. 5).

The fact that the content of box 3 adds up not quite to 1, but to 0.997, is an artefact of rounding not quite consistently when going from box 2 to box 3. I wanted to keep the sum of all that’s in box 2 at the precision level of that box.

GALEX Confirms Nature of Dark Energy

New results from NASA's Galaxy Evolution Explorer and the Anglo-Australian Telescope atop Siding Spring Mountain in Australia confirm that dark energy (represented by purple grid) is a smooth, uniform force that now dominates over the effects of gravity (green grid). The observations follow from careful measurements of the separations between pairs of galaxies (examples of such pairs are illustrated here). Image credit: NASA/JPL-Caltech


From a JPL press release:

A five-year survey of 200,000 galaxies, stretching back seven billion years in cosmic time, has led to one of the best independent confirmations that dark energy is driving our universe apart at accelerating speeds. The survey used data from NASA’s space-based Galaxy Evolution Explorer and the Anglo-Australian Telescope on Siding Spring Mountain in Australia.

The findings offer new support for the favored theory of how dark energy works — as a constant force, uniformly affecting the universe and propelling its runaway expansion. They contradict an alternate theory, where gravity, not dark energy, is the force pushing space apart. According to this alternate theory, with which the new survey results are not consistent, Albert Einstein’s concept of gravity is wrong, and gravity becomes repulsive instead of attractive when acting at great distances.

“The action of dark energy is as if you threw a ball up in the air, and it kept speeding upward into the sky faster and faster,” said Chris Blake of the Swinburne University of Technology in Melbourne, Australia. Blake is lead author of two papers describing the results that appeared in recent issues of the Monthly Notices of the Royal Astronomical Society. “The results tell us that dark energy is a cosmological constant, as Einstein proposed. If gravity were the culprit, then we wouldn’t be seeing these constant effects of dark energy throughout time.”

Dark energy is thought to dominate our universe, making up about 74 percent of it. Dark matter, a slightly less mysterious substance, accounts for 22 percent. So-called normal matter, anything with atoms, or the stuff that makes up living creatures, planets and stars, is only approximately four percent of the cosmos.

The idea of dark energy was proposed during the previous decade, based on studies of distant exploding stars called supernovae. Type Ia supernovae flare with a consistent, predictable brightness, making them so-called “standard candles,” which allows calculation of their distance from Earth. Observations revealed dark energy was flinging the objects out at accelerating speeds.

This diagram illustrates two ways to measure how fast the universe is expanding – the "standard candle" method, which involves exploded stars in galaxies, and the "standard ruler" method, which involves pairs of galaxies. Image credit: NASA/JPL-Caltech

Dark energy is in a tug-of-war contest with gravity. In the early universe, gravity took the lead, dominating dark energy. At about 8 billion years after the Big Bang, as space expanded and matter became diluted, gravitational attractions weakened and dark energy gained the upper hand. Billions of years from now, dark energy will be even more dominant. Astronomers predict our universe will be a cosmic wasteland, with galaxies spread apart so far that any intelligent beings living inside them wouldn’t be able to see other galaxies.

The new survey provides two separate methods for independently checking the supernovae results. This is the first time astronomers performed these checks across the whole cosmic timespan dominated by dark energy. The team began by assembling the largest three-dimensional map of galaxies in the distant universe, spotted by the Galaxy Evolution Explorer. The ultraviolet-sensing telescope has scanned about three-quarters of the sky, observing hundreds of millions of galaxies.

“The Galaxy Evolution Explorer helped identify bright, young galaxies, which are ideal for this type of study,” said Christopher Martin, principal investigator for the mission at the California Institute of Technology in Pasadena. “It provided the scaffolding for this enormous 3-D map.”

The astronomers acquired detailed information about the light from each galaxy using the Anglo-Australian Telescope and studied the pattern of distances between them. Sound waves from the very early universe left imprints in the patterns of galaxies, causing pairs of galaxies to be separated by approximately 500 million light-years.

This “standard ruler” was used to determine the distance from the galaxy pairs to Earth — the closer a galaxy pair is to us, the farther apart the galaxies will appear from each other on the sky. As with the supernovae studies, these distance data were combined with information about the speeds at which the pairs are moving away from us, revealing, yet again, that the fabric of space is stretching apart faster and faster.
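
To get a feel for how a standard ruler works, here is a deliberately simplified sketch: it uses the small-angle rule and plain straight-line distances, so it illustrates the principle only – the survey's actual analysis uses proper cosmological distance measures:

```python
import math

# Toy "standard ruler": the farther away a galaxy pair of fixed ~500-Mly
# separation is, the smaller the angle it subtends on the sky.
RULER_MLY = 500.0  # characteristic pair separation, millions of light-years

def apparent_separation_deg(distance_mly: float) -> float:
    """Angle (degrees) subtended by the ruler at a given distance."""
    return math.degrees(RULER_MLY / distance_mly)

for d in (2_000, 5_000, 10_000):
    print(f"pair at {d:>6} Mly -> {apparent_separation_deg(d):4.1f} degrees apart")
```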

The team also used the galaxy map to study how clusters of galaxies grow over time, like cities, eventually containing many thousands of galaxies. The clusters attract new galaxies through gravity, but dark energy tugs the clusters apart, slowing their growth and allowing scientists to measure dark energy’s repulsive force.

“Observations by astronomers over the last 15 years have produced one of the most startling discoveries in physical science; the expansion of the universe, triggered by the Big Bang, is speeding up,” said Jon Morse, astrophysics division director at NASA Headquarters in Washington. “Using entirely independent methods, data from the Galaxy Evolution Explorer have helped increase our confidence in the existence of dark energy.”

For more information, see the Australian Astronomical Observatory.

Antigravity Could Replace Dark Energy as Cause of Universe’s Expansion

Illustration of Antimatter/Matter Annihilation. (NASA/CXC/M. Weiss)


Since the late 20th century, astronomers have been aware of data that suggest the universe is not only expanding, but expanding at an accelerating rate. According to the currently accepted model, this accelerated expansion is due to dark energy, a mysterious repulsive force that makes up about 73% of the energy density of the universe. Now, a new study proposes an alternative theory: that the expansion of the universe is actually due to the relationship between matter and antimatter. According to this study, matter and antimatter gravitationally repel each other and create a kind of “antigravity” that could do away with the need for dark energy in the universe.

Massimo Villata, a scientist from the Observatory of Turin in Italy, began the study with two major assumptions. First, he posited that both matter and antimatter have positive mass and energy density. Traditionally, the gravitational influence of a particle is determined solely by its mass. A positive mass value indicates that the particle will attract other particles gravitationally. Under Villata’s assumption, this applies to antiparticles as well. So under the influence of gravity, particles attract other particles and antiparticles attract other antiparticles. But what kind of force occurs between particles and antiparticles?

To resolve this question, Villata needed to introduce the second assumption – that general relativity is CPT invariant. This means that the laws governing an ordinary matter particle in an ordinary field in spacetime can be applied equally well to scenarios in which charge (electric charge and internal quantum numbers), parity (spatial coordinates) and time are reversed, as they are for antimatter. When you reverse the equations of general relativity in charge, parity and time for either the particle or the field the particle is traveling in, the result is a change of sign in the gravity term, making it negative instead of positive and implying so-called antigravity between the two.

Villata cited the quaint example of an apple falling on Isaac Newton’s head. If an anti-apple falls on an anti-Earth, the two will attract and the anti-apple will hit anti-Newton on the head; however, an anti-apple cannot “fall” on regular old Earth, which is made of regular old matter. Instead, the anti-apple will fly away from Earth because of gravity’s change in sign. In other words, if general relativity is, in fact, CPT invariant, antigravity would cause particles and antiparticles to mutually repel. On a much larger scale, Villata claims that the universe is expanding because of this powerful repulsion between matter and antimatter.

What about the fact that matter and antimatter are known to annihilate each other? Villata resolved this paradox by placing antimatter far away from matter, in the enormous voids between galaxy clusters. These voids are believed to have stemmed from tiny negative fluctuations in the primordial density field and do seem to possess a kind of antigravity, repelling all matter away from them. Of course, the reason astronomers don’t actually observe any antimatter in the voids is still up in the air. In Villata’s words, “There is more than one possible answer, which will be investigated elsewhere.” The research appears in this month’s edition of Europhysics Letters.

Hubble Rules Out One Alternative to Dark Energy

NGC 5584. Credit: NASA, ESA, A. Riess (STScI/JHU), L. Macri (Texas A&M University), and Hubble Heritage Team (STScI/AURA)

[/caption]

From a NASA press release:

Astronomers using NASA’s Hubble Space Telescope have ruled out an alternate theory on the nature of dark energy after recalculating the expansion rate of the universe to unprecedented accuracy.

The universe appears to be expanding at an increasing rate. Some believe that is because the universe is filled with a dark energy that works in the opposite way of gravity. One alternative to that hypothesis is that an enormous bubble of relatively empty space eight billion light-years across surrounds our galactic neighborhood. If we lived near the center of this void, observations of galaxies being pushed away from each other at accelerating speeds would be an illusion.

This hypothesis has been invalidated because astronomers have refined their understanding of the universe’s present expansion rate. Adam Riess of the Space Telescope Science Institute (STScI) and Johns Hopkins University in Baltimore, Md., led the research. The Hubble observations were conducted by the SHOES (Supernova H0 for the Equation of State) team that works to refine the accuracy of the Hubble constant to a precision that allows for a better characterization of dark energy’s behavior. The observations helped determine a figure for the universe’s current expansion rate to an uncertainty of just 3.3 percent. The new measurement reduces the error margin by 30 percent over Hubble’s previous best measurement in 2009. Riess’s results appear in the April 1 issue of The Astrophysical Journal.

“We are using the new camera on Hubble like a policeman’s radar gun to catch the universe speeding,” Riess said. “It looks more like it’s dark energy that’s pressing the gas pedal.”

Riess’ team first had to determine accurate distances to galaxies near and far from Earth. The team compared those distances with the speed at which the galaxies are apparently receding because of the expansion of space. They used those two values to calculate the Hubble constant, the number that relates the speed at which a galaxy appears to recede to its distance from the Milky Way. Because astronomers cannot physically measure the distances to galaxies, researchers had to find stars or other objects that serve as reliable cosmic yardsticks. These are objects whose intrinsic brightness – brightness that hasn’t been dimmed by distance, an atmosphere, or interstellar dust – is known. Their distances, therefore, can be inferred by comparing their true brightness with their apparent brightness as seen from Earth.
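
The principle behind such a yardstick is the inverse-square law. Here is a minimal sketch, using the Sun as a familiar test case (the constants are standard textbook values):

```python
import math

# Inverse-square law behind a cosmic yardstick: if an object's intrinsic
# luminosity is known, its observed flux gives the distance.
def distance_from_brightness(luminosity_w: float, flux_w_per_m2: float) -> float:
    """Distance in metres, inverted from flux = L / (4 * pi * d**2)."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_per_m2))

L_SUN = 3.828e26       # solar luminosity, watts
SOLAR_CONSTANT = 1361  # the Sun's flux at Earth, W/m^2
AU = 1.496e11          # astronomical unit, metres

d = distance_from_brightness(L_SUN, SOLAR_CONSTANT)
print(f"{d / AU:.2f} AU")  # ~1.00 -- recovers the Earth-Sun distance
```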

To calculate longer distances, Riess’ team chose a special class of exploding stars called Type Ia supernovae. These stellar explosions all flare with similar luminosity and are brilliant enough to be seen far across the universe. By comparing the apparent brightness of Type Ia supernovae and pulsating Cepheid stars, the astronomers could measure accurately their intrinsic brightness and therefore calculate distances to Type Ia supernovae in far-flung galaxies.

Using the sharpness of the new Wide Field Camera 3 (WFC3) to study more stars in visible and near-infrared light, scientists eliminated systematic errors introduced by comparing measurements from different telescopes.

“WFC3 is the best camera ever flown on Hubble for making these measurements, improving the precision of prior measurements in a small fraction of the time it previously took,” said Lucas Macri, a collaborator on the SHOES Team from Texas A&M in College Station.

Knowing the precise value of the universe’s expansion rate further restricts the range of dark energy’s strength and helps astronomers tighten up their estimates of other cosmic properties, including the universe’s shape and its roster of neutrinos – the ghostly particles that filled the early universe.

“Thomas Edison once said ‘every wrong attempt discarded is a step forward,’ and this principle still governs how scientists approach the mysteries of the cosmos,” said Jon Morse, astrophysics division director at NASA Headquarters in Washington. “By falsifying the bubble hypothesis of the accelerating expansion, NASA missions like Hubble bring us closer to the ultimate goal of understanding this remarkable property of our universe.”

Science Paper by: Adam G. Riess et al. (PDF document)

Astronomers Now Closer to Understanding Dark Energy

The Hubble Space Telescope image of the inner regions of the lensing cluster Abell 1689, which is 2.2 billion light-years away. Light from distant background galaxies is bent by the concentrated dark matter in the cluster (shown in the blue overlay) to produce the plethora of arcs and arclets that were in turn used to constrain dark energy. Image courtesy of NASA/ESA, Jullo (JPL), Natarajan (Yale), Kneib (LAM)

Understanding something we can’t see has been a problem that astronomers have overcome in the past. Now, a group of scientists believe a new technique will meet the challenge of helping to solve one of the biggest mysteries in cosmology today: understanding the nature of dark energy. Using the strong gravitational lensing method — where a massive galaxy cluster acts as a cosmic magnifying lens — an international team of astronomers have been able to study elusive dark energy for the first time. The team reports that when combined with existing techniques, their results significantly improve current measurements of the mass and energy content of the universe.

Using data taken by the Hubble Space Telescope as well as ground-based telescopes, the team analyzed images of 34 extremely distant galaxies situated behind Abell 1689, one of the biggest and most massive known galaxy clusters in the universe.

Through the gravitational lens of Abell 1689, the astronomers, led by Eric Jullo from JPL and Priyamvada Natarajan from Yale University, were able to detect the faint, distant background galaxies—whose light was bent and projected by the cluster’s massive gravitational pull—much as a magnifying glass distorts an object’s image.

Using this method, the astronomers were able to reduce the overall error in dark energy’s equation-of-state parameter by 30 percent, when combined with other methods.

The way in which the images were distorted gave the astronomers clues as to the geometry of the space that lies between the Earth, the cluster and the distant galaxies. “The content, geometry and fate of the universe are linked, so if you can constrain two of those things, you learn something about the third,” Natarajan said.

The team was able to narrow the range of current estimates about dark energy’s effect on the universe, denoted by the value w, by 30 percent. The team combined their new technique with other methods, including using supernovae, X-ray galaxy clusters and data from the Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft, to constrain the value for w.

“Dark energy is characterized by the relationship between its pressure and its density: this is known as its equation of state,” said Jullo. “Our goal was to try to quantify this relationship. It teaches us about the properties of dark energy and how it has affected the development of the Universe.”

Dark energy makes up about 72 percent of all the mass and energy in the universe and will ultimately determine its fate. The new results confirm previous findings that the nature of dark energy likely corresponds to a flat universe. In this scenario, the expansion of the universe will continue to accelerate and the universe will expand forever.

The astronomers say the real strength of this new result is that it devises a totally new way to extract information about the elusive dark energy, and it offers great promise for future applications.

According to the scientists, their method required multiple, meticulous steps to develop. They spent several years developing specialized mathematical models and precise maps of the matter — both dark and “normal” — that together constitute the Abell 1689 cluster.

The findings appear in the August 20 issue of the journal Science.

Sources: Yale University, Science Express, ESA Hubble.

New Technique Could Track Down Dark Energy

Robert C. Byrd Green Bank Telescope CREDIT: NRAO/AUI/NSF


From an NRAO press release:

Dark energy is the label scientists have given to whatever is causing the Universe to expand at an accelerating rate, and it is believed to make up nearly three-fourths of the mass and energy of the Universe. While the acceleration was discovered in 1998, its cause remains unknown. Physicists have advanced competing theories to explain the acceleration, and believe the best way to test those theories is to precisely measure large-scale cosmic structures. A new technique developed for the Robert C. Byrd Green Bank Telescope (GBT) has given astronomers a new way to map such large cosmic structures and thereby probe dark energy.

Sound waves in the matter-energy soup of the extremely early Universe are thought to have left detectable imprints on the large-scale distribution of galaxies in the Universe. The researchers developed a way to measure such imprints by observing the radio emission of hydrogen gas. Their technique, called intensity mapping, when applied to greater areas of the Universe, could reveal how such large-scale structure has changed over the last few billion years, giving insight into which theory of dark energy is the most accurate.

“Our project mapped hydrogen gas to greater cosmic distances than ever before, and shows that the techniques we developed can be used to map huge volumes of the Universe in three dimensions and to test the competing theories of dark energy,” said Tzu-Ching Chang, of the Academia Sinica in Taiwan and the University of Toronto.

To get their results, the researchers used the GBT to study a region of sky that previously had been surveyed in detail in visible light by the Keck II telescope in Hawaii. This optical survey used spectroscopy to map the locations of thousands of galaxies in three dimensions. With the GBT, instead of looking for hydrogen gas in these individual, distant galaxies — a daunting challenge beyond the technical capabilities of current instruments — the team used their intensity-mapping technique to accumulate the radio waves emitted by the hydrogen gas in large volumes of space including many galaxies.

“Since the early part of the 20th Century, astronomers have traced the expansion of the Universe by observing galaxies. Our new technique allows us to skip the galaxy-detection step and gather radio emissions from a thousand galaxies at a time, as well as all the dimly-glowing material between them,” said Jeffrey Peterson, of Carnegie Mellon University.

The astronomers also developed new techniques that removed both man-made radio interference and radio emission from nearer astronomical sources, leaving only the extremely faint radio waves coming from the very distant hydrogen gas. The result was a map of part of the “cosmic web” that correlated neatly with the structure shown by the earlier optical study. The team first proposed their intensity-mapping technique in 2008, and their GBT observations were the first test of the idea.

“These observations detected more hydrogen gas than all the previously-detected hydrogen in the Universe, and at distances ten times farther than any radio wave-emitting hydrogen seen before,” said Ue-Li Pen of the University of Toronto.

“This is a demonstration of an important technique that has great promise for future studies of the evolution of large-scale structure in the Universe,” said National Radio Astronomy Observatory Chief Scientist Chris Carilli, who was not part of the research team.

In addition to Chang, Peterson, and Pen, the research team included Kevin Bandura of Carnegie Mellon University. The scientists reported their work in the July 22 issue of the scientific journal Nature.

Using Gravitational Lensing to Measure Age and Size of Universe

A gravitational lens image of the B1608+656 system. Image courtesy Sherry Suyu of the Argelander Institut für Astronomie in Bonn, Germany.


Handy little tool, this gravitational lensing! Astronomers have used it to measure the shape of stars, look for exoplanets, and measure dark matter in distant galaxies. Now it’s being used to measure the size and age of the Universe. Researchers say this new use of gravitational lensing provides a very precise way to measure how rapidly the universe is expanding. The measurement determines a value for the Hubble constant, which sets the scale of the universe, and confirms the age of the Universe as 13.75 billion years, give or take 170 million years. The results also confirm the strength of dark energy, responsible for accelerating the expansion of the universe.

Gravitational lensing occurs when two galaxies happen to be aligned with one another along our line of sight in the sky. The gravitational field of the nearer galaxy distorts the image of the more distant galaxy into multiple arc-shaped images. Sometimes this effect even creates a complete ring, known as an “Einstein Ring.”

Researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) used gravitational lensing to measure the distances light traveled from a bright, active galaxy to the earth along different paths. By understanding the time it took to travel along each path and the effective speeds involved, researchers could infer not just how far away the galaxy lies but also the overall scale of the universe and some details of its expansion.

Distinguishing distances in space is difficult. A bright light far away and a dimmer source lying much closer can look like they are at the same distance. A gravitational lens circumvents this problem by providing multiple clues as to the distance light travels. That extra information allows researchers to determine the size of the universe, often expressed by astrophysicists in terms of a quantity called Hubble’s constant.

“We’ve known for a long time that lensing is capable of making a physical measurement of Hubble’s constant,” KIPAC’s Phil Marshall said. However, gravitational lensing had never before been used in such a precise way. This measurement provides a determination of Hubble’s constant just as precise as those from long-established tools such as observations of supernovae and the cosmic microwave background. “Gravitational lensing has come of age as a competitive tool in the astrophysicist’s toolkit,” Marshall said.

When a large nearby object, such as a galaxy, blocks a distant object, such as another galaxy, the light can detour around the blockage. But instead of taking a single path, light can bend around the object along two or even four different routes, thus doubling or quadrupling the amount of information scientists receive. As the brightness of the background galaxy nucleus fluctuates, physicists can measure the ebb and flow of light along the distinct paths, such as in the B1608+656 system that was the subject of this study. Lead author on the study Sherry Suyu, from the University of Bonn, said, “In our case, there were four copies of the source, which appear as a ring of light around the gravitational lens.”

Though researchers do not know when light left its source, they can still compare arrival times. Marshall likens it to four cars taking four different routes between places on opposite sides of a large city – say, from Stanford University to Lick Observatory – through or around San Jose. And like automobiles facing traffic snarls, light can encounter delays, too.

“The traffic density in a big city is like the mass density in a lens galaxy,” Marshall said. “If you take a longer route, it need not lead to a longer delay time. Sometimes the shorter distance is actually slower.”

The gravitational lens equations account for all the variables such as distance and density, and provide a better idea of when light left the background galaxy and how far it traveled.
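
In spirit, the final step works like the sketch below. Every cosmological distance in the problem scales as c/H0, so a lens model's predicted time delay scales as 1/H0; comparing the predicted and observed delays rescales a trial value of H0. The delay numbers here are hypothetical placeholders – the actual B1608+656 analysis models the lens mass distribution in detail:

```python
# Toy sketch of time-delay cosmography: a longer observed delay than the
# model predicts means the universe is bigger (H0 smaller), and vice versa.
H0_TRIAL = 72.0  # km/s/Mpc, assumed trial value used to predict the delay

def h0_from_delay(predicted_days: float, observed_days: float) -> float:
    """Rescale the trial H0 by the ratio of predicted to observed delay."""
    return H0_TRIAL * predicted_days / observed_days

# Hypothetical numbers, purely for illustration:
print(f"{h0_from_delay(predicted_days=32.0, observed_days=31.5):.1f} km/s/Mpc")  # ~73.1
```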

In the past, this method of distance estimation was plagued by errors, but physicists now believe it is comparable with other measurement methods. With this technique, the researchers have come up with a more accurate lensing-based value for Hubble’s constant, and a better estimation of the uncertainty in that constant. By both reducing and understanding the size of error in calculations, they can achieve better estimations on the structure of the lens and the size of the universe.

There are several factors scientists still need to account for in determining distances with lenses. For example, dust in the lens can skew the results. The Hubble Space Telescope has infra-red filters useful for eliminating dust effects. The images also contain information about the number of galaxies lying around the line of vision; these contribute to the lensing effect at a level that needs to be taken into account.

Marshall says several groups are working on extending this research, both by finding new systems and further examining known lenses. Researchers are already aware of more than twenty other astronomical systems suitable for analysis with gravitational lensing.

The results of this study were published in the March 1 issue of The Astrophysical Journal. The researchers used data collected by the NASA/ESA Hubble Space Telescope, and showed the improved precision these data provide in combination with the Wilkinson Microwave Anisotropy Probe (WMAP).

Source: SLAC

Quintessence

Quintessence is one idea – hypothesis – of what dark energy is (remember that dark energy is the shorthand expression of the apparent acceleration of the expansion of the universe … or the form of mass-energy which causes this observed acceleration, in cosmological models built with Einstein’s theory of general relativity).

The word quintessence means fifth essence, and is kinda cute … remember Earth, Water, Fire, and Air, the ‘four essences’ of the Ancient Greeks? Well, in modern cosmology, there are also four essences: normal matter, radiation (photons), cold dark matter, and neutrinos (which are hot dark matter!).

Quintessence covers a range of hypotheses (or models); the main difference between quintessence as a (possible) explanation for dark energy and the cosmological constant Λ (which harks back to Einstein and the early years of the 20th century) is that quintessence varies with time (albeit slooowly), and can also vary with location (space). One version of quintessence is phantom energy, in which the energy density increases with time, and leads to a Big Rip end of the universe.
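
The time behaviour is usually captured by the equation-of-state parameter w: the dark-energy density scales as rho ∝ a^(-3(1+w)) with the cosmic scale factor a. A small sketch of what the three families of models do:

```python
# Dark-energy density vs. cosmic scale factor a: rho(a) = rho_0 * a**(-3*(1+w)).
# A cosmological constant (w = -1) stays flat; quintessence with w > -1 slowly
# dilutes; phantom energy (w < -1) grows with time, heading for a Big Rip.
def rho_relative(a: float, w: float) -> float:
    return a ** (-3 * (1 + w))

for w in (-1.0, -0.8, -1.2):
    print(f"w = {w:+.1f}: rho(a=2)/rho(a=1) = {rho_relative(2.0, w):.2f}")
# -1.0 -> 1.00 (constant), -0.8 -> 0.66 (fading), -1.2 -> 1.52 (growing)
```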

Quintessence, as a scalar field, is not the least bit unusual in physics (the Newtonian gravitational potential field is one example of a real scalar field; the Higgs field of the Standard Model of particle physics is an example of a complex scalar field); however, it has some difficulties in common with the cosmological constant (in a nutshell: how can it be so small?).

Can quintessence be observed; or, rather, can quintessence be distinguished from a cosmological constant? In astronomy, yes … by finding a way to observe (and measure) the acceleration of the universe at widely different times (quintessence and Λ predict different results). Another way might be to observe variations in the fundamental constants (e.g. the fine structure constant) or violations of Einstein’s equivalence principle.

One project seeking to measure the acceleration of the universe more accurately was ESSENCE (“Equation of State: SupErNovae trace Cosmic Expansion”).

In 1999, CERN Courier published a nice summary of cosmology as it was understood then, a year after the discovery of dark energy: The quintessence of cosmology (it’s well worth a read, though a lot has happened in the past decade).

Universe Today articles? Yep! For example Will the Universe Expand Forever?, More Evidence for Dark Energy, and Hubble Helps Measure the Pace of Dark Energy.

Astronomy Cast episodes relevant to quintessence include What is the universe expanding into?, and A Universe of Dark Energy.

Source: NASA

New Search for Dark Energy Goes Back in Time

A previous optical image of one of the approximately 200 quasars captured in the Baryon Oscillation Spectroscopic Survey (BOSS) "first light" exposure is shown at top, with the BOSS spectrum of the object at bottom. The spectrum allows astronomers to determine the object's redshift. With millions of such spectra, BOSS will measure the geometry of the Universe. Credit: David Hogg, Vaishali Bhardwaj, and Nic Ross of SDSS-III

Baryon acoustic oscillation (BAO) sounds like it could be technobabble from a Star Trek episode. BAO is real, but astronomers are searching for these ancient density fluctuations to do what seems like science fiction: look back in time to find clues about dark energy. The Baryon Oscillation Spectroscopic Survey (BOSS), a part of the Sloan Digital Sky Survey III (SDSS-III), took its “first light” of astronomical data last month, and will map the expansion history of the Universe.

“Baryon oscillation is a fast-maturing method for measuring dark energy in a way that’s complementary to the proven techniques of supernova cosmology,” said David Schlegel from the Lawrence Berkeley National Laboratory (Berkeley Lab), the Principal Investigator of BOSS. “The data from BOSS will be some of the best ever obtained on the large-scale structure of the Universe.”

BOSS uses the same telescope as the original Sloan Digital Sky Survey – the 2.5-meter telescope at Apache Point Observatory in New Mexico – but equipped with new, specially built spectrographs to measure the spectra.

Senior Operations Engineer Dan Long loads the first cartridge of the night into the Sloan Digital Sky Survey telescope. The cartridge holds a “plug-plate” at the top which then holds a thousand optical fibers shown in red and blue. These cartridges are locked into the base of the telescope and are changed many times during a night. Photo credit: D. Long

Baryon oscillations began when pressure waves traveled through the early universe. The same density variations left their mark as the Universe evolved, in the periodic clustering of visible matter in galaxies, quasars, and intergalactic gas, as well as in the clumping of invisible dark matter.

Comparing these scales at different eras makes it possible to trace the details of how the Universe has expanded throughout its history – information that can be used to distinguish among competing theories of dark energy.

“Like sound waves passing through air, the waves push some of the matter closer together as they travel,” said Nikhil Padmanabhan, a BOSS researcher who recently moved from Berkeley Lab to Yale University. “In the early universe, these waves were moving at half the speed of light, but when the universe was only a few hundred thousand years old, the universe cooled enough to halt the waves, leaving a signature 500 million light-years in length.”

“We can see these frozen waves in the distribution of galaxies today,” said Daniel Eisenstein of the University of Arizona, the Director of the SDSS-III. “By measuring the length of the baryon oscillations, we can determine how dark energy has affected the expansion history of the universe. That in turn helps us figure out what dark energy could be.”

“Studying baryon oscillations is an exciting method for measuring dark energy in a way that’s complementary to techniques in supernova cosmology,” said Kyle Dawson of the University of Utah, who is leading the commissioning of BOSS. “BOSS’s galaxy measurements will be a revolutionary dataset that will provide rich insights into the universe,” added Martin White of Berkeley Lab, BOSS’s survey scientist.

On Sept. 14-15, 2009, astronomers used BOSS to measure the spectra of a thousand galaxies and quasars. The goal of BOSS is to measure 1.4 million luminous red galaxies at redshifts up to 0.7 (when the Universe was roughly seven billion years old) and 160,000 quasars at redshifts between 2.0 and 3.0 (when the Universe was only about three billion years old). BOSS will also measure variations in the density of hydrogen gas between the galaxies. The observation program will take five years.

Source: Sloan Digital Sky Survey

Variability in Type 1A Supernovae Has Implications for Studying Dark Energy

A Hubble Space Telescope image of Supernova 1994D (SN 1994D) in galaxy NGC 4526 (SN 1994D is the bright spot on the lower left). Image Credit: HST


The discovery of dark energy, a mysterious force that is accelerating the expansion of the universe, was based on observations of type 1a supernovae, and these stellar explosions have long been used as “standard candles” for measuring the expansion. But not all type 1a supernovae are created equal. A new study reveals sources of variability in these supernovae, and to accurately probe the nature of dark energy and determine if it is constant or variable over time, scientists will have to find a way to measure cosmic distances with much greater precision than they have in the past.

“As we begin the next generation of cosmology experiments, we will want to use type 1a supernovae as very sensitive measures of distance,” said lead author Daniel Kasen, of a study published in Nature this week. “We know they are not all the same brightness, and we have ways of correcting for that, but we need to know if there are systematic differences that would bias the distance measurements. So this study explored what causes those differences in brightness.”

Kasen and his coauthors – Fritz Röpke of the Max Planck Institute for Astrophysics in Garching, Germany, and Stan Woosley, professor of astronomy and astrophysics at UC Santa Cruz – used supercomputers to run dozens of simulations of type 1a supernovae. The results indicate that much of the diversity observed in these supernovae is due to the chaotic nature of the processes involved and the resulting asymmetry of the explosions.

For the most part, this variability would not produce systematic errors in measurement studies as long as researchers use large numbers of observations and apply the standard corrections, Kasen said. The study did find a small but potentially worrisome effect that could result from systematic differences in the chemical compositions of stars at different times in the history of the universe. But researchers can use the computer models to further characterize this effect and develop corrections for it.

A type 1a supernova occurs when a white dwarf star acquires additional mass by siphoning matter away from a companion star. When it reaches a critical mass – 1.4 times the mass of the Sun, packed into an object the size of the Earth – the heat and pressure in the center of the star spark a runaway nuclear fusion reaction, and the white dwarf explodes. Since the initial conditions are about the same in all cases, these supernovae tend to have the same luminosity, and their “light curves” (how the luminosity changes over time) are predictable.

Some are intrinsically brighter than others, but these flare and fade more slowly, and this correlation between the brightness and the width of the light curve allows astronomers to apply a correction to standardize their observations. So astronomers can measure the light curve of a type 1a supernova, calculate its intrinsic brightness, and then determine how far away it is, since the apparent brightness diminishes with distance (just as a candle appears dimmer at a distance than it does up close).
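
Schematically, the width-luminosity correction works like the sketch below; the slope and reference decline rate are illustrative placeholders, not the fitted values astronomers actually use:

```python
# Sketch of a width-luminosity ("Phillips-type") correction: fast-declining
# type 1a supernovae are intrinsically dimmer, so their observed peak
# magnitudes are adjusted according to the decline rate of the light curve.
def standardized_peak_mag(m_peak: float, decline_15d: float,
                          slope: float = 0.7, ref_decline: float = 1.1) -> float:
    """Correct a peak magnitude using the decline (in magnitudes) over 15 days."""
    return m_peak - slope * (decline_15d - ref_decline)

# A fast decliner (dimmer than standard) is corrected brighter, i.e. toward
# a smaller magnitude:
print(standardized_peak_mag(m_peak=19.3, decline_15d=1.5))  # 19.02
```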

The computer models used to simulate these supernovae in the new study are based on current theoretical understanding of how and where the ignition process begins inside the white dwarf and where it makes the transition from slow-burning combustion to explosive detonation.

The simulations showed that the asymmetry of the explosions is a key factor determining the brightness of type 1a supernovae. “The reason these supernovae are not all the same brightness is closely tied to this breaking of spherical symmetry,” Kasen said.

The dominant source of variability is the synthesis of new elements during the explosions, which is sensitive to differences in the geometry of the first sparks that ignite a thermonuclear runaway in the simmering core of the white dwarf. Nickel-56 is especially important, because the radioactive decay of this unstable isotope creates the afterglow that astronomers are able to observe for months or even years after the explosion.

“The decay of nickel-56 is what powers the light curve. The explosion is over in a matter of seconds, so what we see is the result of how the nickel heats the debris and how the debris radiates light,” Kasen said.

Kasen developed the computer code to simulate this radiative transfer process, using output from the simulated explosions to produce visualizations that can be compared directly to astronomical observations of supernovae.

The good news is that the variability seen in the computer models agrees with observations of type 1a supernovae. “Most importantly, the width and peak luminosity of the light curve are correlated in a way that agrees with what observers have found. So the models are consistent with the observations on which the discovery of dark energy was based,” Woosley said.

Another source of variability is that these asymmetric explosions look different when viewed at different angles. This can account for differences in brightness of as much as 20 percent, Kasen said, but the effect is random and creates scatter in the measurements that can be statistically reduced by observing large numbers of supernovae.
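
That statistical reduction is just the usual 1/√N averaging over independent random errors; a quick sketch:

```python
import math

# Random viewing-angle scatter averages down as 1/sqrt(N) over many supernovae.
SINGLE_SN_SCATTER = 0.20  # ~20% brightness scatter for a single supernova

for n in (1, 25, 100, 400):
    print(f"N = {n:>3}: residual scatter ~ {SINGLE_SN_SCATTER / math.sqrt(n):.1%}")
# 400 supernovae bring a 20% random scatter down to ~1%
```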

The potential for systematic bias comes primarily from variation in the initial chemical composition of the white dwarf star. Heavier elements are synthesized during supernova explosions, and debris from those explosions is incorporated into new stars. As a result, stars formed recently are likely to contain more heavy elements (higher “metallicity,” in astronomers’ terminology) than stars formed in the distant past.

“That’s the kind of thing we expect to evolve over time, so if you look at distant stars corresponding to much earlier times in the history of the universe, they would tend to have lower metallicity,” Kasen said. “When we calculated the effect of this in our models, we found that the resulting errors in distance measurements would be on the order of 2 percent or less.”

Further studies using computer simulations will enable researchers to characterize the effects of such variations in more detail and limit their impact on future dark-energy experiments, which might require a level of precision that would make errors of 2 percent unacceptable.

Source: EurekAlert