Variability in Type Ia Supernovae Has Implications for Studying Dark Energy


The discovery of dark energy, a mysterious force that is accelerating the expansion of the universe, was based on observations of Type Ia supernovae, and these stellar explosions have long been used as “standard candles” for measuring that expansion. But not all Type Ia supernovae are created equal. A new study reveals sources of variability in these supernovae; to accurately probe the nature of dark energy and determine whether it is constant or variable over time, scientists will have to measure cosmic distances with much greater precision than they have in the past.

“As we begin the next generation of cosmology experiments, we will want to use Type Ia supernovae as very sensitive measures of distance,” said Daniel Kasen, lead author of a study published in Nature this week. “We know they are not all the same brightness, and we have ways of correcting for that, but we need to know if there are systematic differences that would bias the distance measurements. So this study explored what causes those differences in brightness.”

Kasen and his coauthors, Fritz Röpke of the Max Planck Institute for Astrophysics in Garching, Germany, and Stan Woosley, professor of astronomy and astrophysics at UC Santa Cruz, used supercomputers to run dozens of simulations of Type Ia supernovae. The results indicate that much of the diversity observed in these supernovae is due to the chaotic nature of the processes involved and the resulting asymmetry of the explosions.

For the most part, this variability would not produce systematic errors in measurement studies as long as researchers use large numbers of observations and apply the standard corrections, Kasen said. The study did find a small but potentially worrisome effect that could result from systematic differences in the chemical compositions of stars at different times in the history of the universe. But researchers can use the computer models to further characterize this effect and develop corrections for it.

A Type Ia supernova occurs when a white dwarf star acquires additional mass by siphoning matter away from a companion star. When it reaches a critical mass (1.4 times the mass of the Sun, packed into an object the size of the Earth), the heat and pressure in the center of the star spark a runaway nuclear fusion reaction, and the white dwarf explodes. Since the initial conditions are about the same in all cases, these supernovae tend to have the same luminosity, and their “light curves” (how the luminosity changes over time) are predictable.

Some are intrinsically brighter than others, but these flare and fade more slowly, and this correlation between the brightness and the width of the light curve allows astronomers to apply a correction to standardize their observations. So astronomers can measure the light curve of a Type Ia supernova, calculate its intrinsic brightness, and then determine how far away it is, since the apparent brightness diminishes with distance (just as a candle appears dimmer at a distance than it does up close).
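The standard-candle arithmetic can be sketched with the usual distance-modulus relation. The magnitudes below are illustrative numbers, not values from the study:

```python
def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative numbers: a standardized Type Ia peaks near absolute
# magnitude -19.3; suppose we observe a peak apparent magnitude of 16.0.
d_pc = luminosity_distance_pc(16.0, -19.3)
print(f"{d_pc / 1e6:.0f} Mpc")  # about 115 Mpc
```

Once the light-curve width correction has fixed the absolute magnitude, the measured apparent magnitude alone pins down the distance.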

The computer models used to simulate these supernovae in the new study are based on current theoretical understanding of how and where the ignition process begins inside the white dwarf and where it makes the transition from slow-burning combustion to explosive detonation.

The simulations showed that the asymmetry of the explosions is a key factor determining the brightness of type 1a supernovae. “The reason these supernovae are not all the same brightness is closely tied to this breaking of spherical symmetry,” Kasen said.

The dominant source of variability is the synthesis of new elements during the explosions, which is sensitive to differences in the geometry of the first sparks that ignite a thermonuclear runaway in the simmering core of the white dwarf. Nickel-56 is especially important, because the radioactive decay of this unstable isotope creates the afterglow that astronomers are able to observe for months or even years after the explosion.

“The decay of nickel-56 is what powers the light curve. The explosion is over in a matter of seconds, so what we see is the result of how the nickel heats the debris and how the debris radiates light,” Kasen said.
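Why the nickel decay sets the light curve's timescale can be sketched with the standard Bateman equations for the two-step chain 56Ni → 56Co → 56Fe. The half-lives are the known laboratory values; everything else is a simplified illustration, not Kasen's radiative-transfer code:

```python
import math

T_NI, T_CO = 6.1, 77.2                             # half-lives in days
L1, L2 = math.log(2) / T_NI, math.log(2) / T_CO    # decay constants

def ni56(t, n0=1.0):
    """Fraction of the initial 56Ni remaining after t days."""
    return n0 * math.exp(-L1 * t)

def co56(t, n0=1.0):
    """56Co abundance from the Bateman solution for a two-step chain."""
    return n0 * L1 / (L2 - L1) * (math.exp(-L1 * t) - math.exp(-L2 * t))

# After a few weeks the fast-decaying nickel is nearly gone and the
# slower cobalt decay powers the long tail of the light curve.
for t in (0, 20, 100):
    print(t, round(ni56(t), 3), round(co56(t), 3))
```

By three weeks after the explosion, the cobalt abundance already exceeds the remaining nickel, which is why the late-time light curve fades on the cobalt half-life rather than the nickel one.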

Kasen developed the computer code to simulate this radiative transfer process, using output from the simulated explosions to produce visualizations that can be compared directly to astronomical observations of supernovae.

The good news is that the variability seen in the computer models agrees with observations of type 1a supernovae. “Most importantly, the width and peak luminosity of the light curve are correlated in a way that agrees with what observers have found. So the models are consistent with the observations on which the discovery of dark energy was based,” Woosley said.

Another source of variability is that these asymmetric explosions look different when viewed at different angles. This can account for differences in brightness of as much as 20 percent, Kasen said, but the effect is random and creates scatter in the measurements that can be statistically reduced by observing large numbers of supernovae.
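The statistical reduction mentioned here is just the usual 1/√N averaging of independent random errors; a quick sketch, with an arbitrary sample size:

```python
import math

def scatter_of_mean(sigma_single, n):
    """Random, uncorrelated scatter averages down as sigma / sqrt(N)."""
    return sigma_single / math.sqrt(n)

# A 20% per-supernova viewing-angle scatter, averaged over 400 events,
# leaves only a 1% residual uncertainty in the mean brightness.
print(f"{scatter_of_mean(0.20, 400):.1%}")
```

This is why random viewing-angle effects are far less worrying than systematic biases, which do not average away no matter how many supernovae are observed.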

The potential for systematic bias comes primarily from variation in the initial chemical composition of the white dwarf star. Heavier elements are synthesized during supernova explosions, and debris from those explosions is incorporated into new stars. As a result, stars formed recently are likely to contain more heavy elements (higher “metallicity,” in astronomers’ terminology) than stars formed in the distant past.

“That’s the kind of thing we expect to evolve over time, so if you look at distant stars corresponding to much earlier times in the history of the universe, they would tend to have lower metallicity,” Kasen said. “When we calculated the effect of this in our models, we found that the resulting errors in distance measurements would be on the order of 2 percent or less.”

Further studies using computer simulations will enable researchers to characterize the effects of such variations in more detail and limit their impact on future dark-energy experiments, which might require a level of precision that would make errors of 2 percent unacceptable.

Source: EurekAlert

New Cosmic “Yardstick” Could Help Understand Dark Energy

A new method for measuring large astronomical distances is providing researchers with a cosmic yardstick to determine precisely how far away distant galaxies are. This could also offer a way to help determine how fast the Universe is expanding, as well as the nature of the mysterious Dark Energy that pervades the Universe. “We measured a direct, geometric distance to the galaxy, independent of the complications and assumptions inherent in other techniques. The measurement highlights a valuable method that can be used to determine the local expansion rate of the Universe, which is essential in our quest to find the nature of Dark Energy,” said James Braatz, of the National Radio Astronomy Observatory (NRAO), who spoke today at the American Astronomical Society’s meeting in Pasadena, California.

Braatz and his colleagues used the National Science Foundation’s Very Long Baseline Array (VLBA) and Robert C. Byrd Green Bank Telescope (GBT), and the Effelsberg Radio Telescope of the Max Planck Institute for Radioastronomy (MPIfR) in Germany to determine that a galaxy dubbed UGC 3789 is 160 million light-years from Earth. To do this, they precisely measured both the linear and angular size of a disk of material orbiting the galaxy’s central black hole. Water molecules in the disk act as masers to amplify, or strengthen, radio waves the way lasers amplify light waves.
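The geometry behind the measurement is the small-angle relation: if you know both the linear size of the maser disk and the tiny angle it subtends on the sky, the distance follows directly. The numbers below are illustrative, not the published UGC 3789 values:

```python
import math

MAS_TO_RAD = 1e-3 / 3600 * math.pi / 180   # milliarcseconds to radians

def geometric_distance_ly(linear_size_ly, angular_size_mas):
    """Small-angle relation: distance = linear size / angle (in radians)."""
    return linear_size_ly / (angular_size_mas * MAS_TO_RAD)

# Illustrative numbers: a maser disk about 1 light-year across
# that subtends roughly 1.3 milliarcseconds on the sky.
d = geometric_distance_ly(1.0, 1.3)
print(f"{d / 1e6:.0f} million light-years")
```

The linear size comes from the masers' orbital speeds and accelerations (measured spectroscopically), while the angular size requires the extreme resolution of very-long-baseline interferometry.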

The observation is a key element of a major effort to measure the expansion rate of the Universe, known as the Hubble Constant, with greatly improved precision. That effort, cosmologists say, is the best way to narrow down possible explanations for the nature of Dark Energy. “The new measurement is important because it demonstrates a one-step, geometric technique for measuring distances to galaxies far enough to infer the expansion rate of the Universe,” said Braatz.

Dark Energy was discovered in 1998 with the observation that the expansion of the Universe is accelerating. It constitutes 70 percent of the matter and energy in the Universe, but its nature remains unknown. Determining its nature is one of the most important problems in astrophysics.

“Measuring precise distances is one of the oldest problems in astronomy, and applying a relatively new radio-astronomy technique to this old problem is vital to solving one of the greatest challenges of 21st Century astrophysics,” said team member Mark Reid of the Harvard-Smithsonian Center for Astrophysics (CfA).

The work on UGC 3789 follows a landmark measurement done with the VLBA in 1999, in which the distance to the galaxy NGC 4258 — 23 million light-years — was directly measured by observing water masers in a disk of material orbiting its central black hole. That measurement allowed refinement of other, indirect distance-measuring techniques using variable stars as “standard candles.”

The measurement to UGC 3789 adds a new milepost seven times more distant than NGC 4258, which itself is too close to measure the Hubble Constant directly. The speed at which NGC 4258 is receding from the Milky Way can be influenced by local effects. “UGC 3789 is far enough that the speed at which it is moving away from the Milky Way is more indicative of the expansion of the Universe,” said team member Elizabeth Humphreys of the CfA.

Following the achievement with NGC 4258, astronomers used the highly-sensitive GBT to search for other galaxies with similar water-molecule masers in disks orbiting their central black holes. Once candidates were found, astronomers then used the VLBA and the GBT together with the Effelsberg telescope to make images of the disks and measure their detailed rotational structure, needed for the distance measurements. This effort requires multi-year observations of each galaxy. UGC 3789 is the first galaxy in the program to yield such a precise distance.

Team member Cheng-Yu Kuo of the University of Virginia presented an image of the maser disk in NGC 6323, a galaxy even more distant than UGC 3789. This is a step toward using this galaxy to provide another valuable cosmic milepost. “The very high sensitivity of the telescopes allows making such images of galaxies even beyond 300 million light years,” said Kuo.

Source: AAS

Astronomers Closing in on Dark Energy with Refined Hubble Constant



The name “dark energy” is just a placeholder for the force — whatever it is — that is causing the Universe to expand. But astronomers are perhaps getting closer to understanding this force. New observations of several Cepheid variable stars by the Hubble Space Telescope have refined the measurement of the Universe’s present expansion rate to a precision where the error is smaller than five percent. The new value for the expansion rate, known as the Hubble constant, or H0 (after Edwin Hubble, who first measured the expansion of the universe nearly a century ago), is 74.2 kilometers per second per megaparsec, with an error margin of ±3.6. The results agree closely with an earlier Hubble measurement of 72 ± 8 km/sec/megaparsec, but are now more than twice as precise.
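Hubble's law, v = H0 × d, turns the constant directly into distances, and the quoted error margin can be checked against the "smaller than five percent" claim:

```python
H0 = 74.2        # km/s per megaparsec (the SHOES result)
H0_ERR = 3.6     # quoted uncertainty, same units

def hubble_distance_mpc(velocity_kms):
    """Hubble's law: v = H0 * d, so d = v / H0."""
    return velocity_kms / H0

# A galaxy receding at 7,420 km/s sits at 100 Mpc, and the fractional
# uncertainty in any such distance is H0_ERR / H0, just under 5%.
print(hubble_distance_mpc(7420.0))
print(f"{H0_ERR / H0:.1%}")
```

The recession velocity in the example is arbitrary; the point is that the fractional error of H0 propagates directly into every distance inferred from it.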

The Hubble measurement, conducted by the SHOES (Supernova H0 for the Equation of State) Team and led by Adam Riess, of the Space Telescope Science Institute and the Johns Hopkins University, uses a number of refinements to streamline and strengthen the construction of a cosmic “distance ladder,” a billion light-years in length, that astronomers use to determine the universe’s expansion rate.

Hubble observations of the pulsating Cepheid variables in a nearby cosmic mile marker, the galaxy NGC 4258, and in the host galaxies of recent supernovae, directly link these distance indicators. The use of Hubble to bridge these rungs in the ladder eliminated the systematic errors that are almost unavoidably introduced by comparing measurements from different telescopes.

Steps to the Hubble Constant. Credit: NASA, ESA, and A. Feild (STScI)

Riess explains the new technique: “It’s like measuring a building with a long tape measure instead of moving a yard stick end over end. You avoid compounding the little errors you make every time you move the yardstick. The higher the building, the greater the error.”

Lucas Macri, professor of physics and astronomy at Texas A&M, and a significant contributor to the results, said, “Cepheids are the backbone of the distance ladder because their pulsation periods, which are easily observed, correlate directly with their luminosities. Another refinement of our ladder is the fact that we have observed the Cepheids in the near-infrared parts of the electromagnetic spectrum where these variable stars are better distance indicators than at optical wavelengths.”

This new, more precise value of the Hubble constant was used to test and constrain the properties of dark energy, the form of energy that produces a repulsive force in space, which is causing the expansion rate of the universe to accelerate.

By bracketing the expansion history of the universe between today and when the universe was only approximately 380,000 years old, the astronomers were able to place limits on the nature of the dark energy that is causing the expansion to speed up. (The measurement for the far, early universe is derived from fluctuations in the cosmic microwave background, as resolved by NASA’s Wilkinson Microwave Anisotropy Probe, WMAP, in 2003.)

Their result is consistent with the simplest interpretation of dark energy: that it is mathematically equivalent to Albert Einstein’s hypothesized cosmological constant, introduced a century ago to push on the fabric of space and prevent the universe from collapsing under the pull of gravity. (Einstein, however, removed the constant once the expansion of the universe was discovered by Edwin Hubble.)

Detail from NGC 3021. Credit: NASA, ESA, and A. Riess (STScI/JHU)

“If you put in a box all the ways that dark energy might differ from the cosmological constant, that box would now be three times smaller,” says Riess. “That’s progress, but we still have a long way to go to pin down the nature of dark energy.”

Though the cosmological constant was conceived of long ago, observational evidence for dark energy didn’t come along until 11 years ago, when two studies, one led by Riess and Brian Schmidt of Mount Stromlo Observatory, and the other by Saul Perlmutter of Lawrence Berkeley National Laboratory, discovered dark energy independently, in part with Hubble observations. Since then astronomers have been pursuing observations to better characterize dark energy.

Riess’s approach to narrowing alternative explanations for dark energy—whether it is a static cosmological constant or a dynamical field (like the repulsive force that drove inflation after the big bang)—is to further refine measurements of the universe’s expansion history.

Before Hubble was launched in 1990, the estimates of the Hubble constant varied by a factor of two. In the late 1990s the Hubble Space Telescope Key Project on the Extragalactic Distance Scale refined the value of the Hubble constant to an error of only about ten percent. This was accomplished by observing Cepheid variables at optical wavelengths out to greater distances than obtained previously and comparing those to similar measurements from ground-based telescopes.

The SHOES team used Hubble’s Near Infrared Camera and Multi-Object Spectrometer (NICMOS) and the Advanced Camera for Surveys (ACS) to observe 240 Cepheid variable stars across seven galaxies. One of these galaxies was NGC 4258, whose distance was very accurately determined through observations with radio telescopes. The other six galaxies recently hosted Type Ia supernovae that are reliable distance indicators for even farther measurements in the universe. Type Ia supernovae all explode with nearly the same amount of energy and therefore have almost the same intrinsic brightness.

By observing Cepheids with very similar properties at near-infrared wavelengths in all seven galaxies, and using the same telescope and instrument, the team was able to more precisely calibrate the luminosity of supernovae. With Hubble’s powerful capabilities, the team was able to sidestep some of the shakiest rungs along the previous distance ladder involving uncertainties in the behavior of Cepheids.

Riess would eventually like to see the Hubble constant refined to a value with an error of no more than one percent, to put even tighter constraints on solutions to dark energy.

Source: Space Telescope Science Institute

Dark Matter, Dark Energy; Now There’s “Dark Gulping”

For all you dark matter and dark energy fans out there, now there’s another new “dark” to add to the list. It’s called “dark gulping,” and it involves a process that may explain how supermassive black holes were able to form in the early universe. Astronomers from University College London (UCL) propose that dark gulping occurred when there were gravitational interactions between the invisible halo of dark matter in a cluster of galaxies and the gas embedded in that halo, back when the Universe was less than a billion years old. They found that the interactions cause the dark matter to form a compact central mass, which can be gravitationally unstable and collapse. The fast dynamical collapse is the dark gulping.

Dr. Curtis Saxton and Professor Kinwah Wu, both of UCL’s Mullard Space Science Laboratory, developed a model to study the process. They say that the dark gulping would have happened very rapidly, without a trace of electro-magnetic radiation being emitted.

There are several theories for how supermassive black holes form. One possibility is that a single large gas cloud collapses. Another is that a black hole formed by the collapse of a giant star swallows up enormous amounts of matter. Still another possibility is that a cluster of small black holes merges together. However, all these options take many millions of years and are at odds with recent observations suggesting that black holes were already present when the Universe was less than a billion years old. Dark gulping may explain how the slowness of gas accretion was circumvented, enabling the rapid emergence of giant black holes. The affected dark mass in the compact core is compatible with the scale of supermassive black holes in galaxies today.

Dark matter appears to gravitationally dominate the dynamics of galaxies and galaxy clusters. However, there is still a great deal of conjecture about the origin, properties and distribution of dark particles. While it appears that dark matter doesn’t interact with light, it does interact with ordinary matter via gravity. “Previous studies have ignored the interaction between gas and the dark matter,” said Saxton, “but, by factoring it into our model, we’ve achieved a much more realistic picture that fits better with observations and may also have gained some insight into the presence of early supermassive black holes.”

According to the model, the development of a compact mass at the core is inevitable. Cooling causes the gas to flow gently in toward the center. The gas can be up to 10 million degrees at the outskirts of the halos, which are a few million light-years in diameter, with a cooler zone toward the core surrounding a warmer interior a few thousand light-years across. The gas doesn’t cool indefinitely, but reaches a minimum temperature, which fits well with X-ray observations of galaxy clusters.

The model also investigates how many dimensions the dark particles move in, as these determine the rate at which the dark halo expands and absorbs and emits heat, and ultimately affect the distribution of dark mass in the system.

“In the context of our model, the observed core sizes of galaxy cluster halos and the observed range of giant black hole masses imply that dark matter particles have between seven and ten degrees of freedom,” said Saxton. “With more than six, the inner region of the dark matter approaches the threshold of gravitational instability, opening up the possibility of dark gulping taking place.”

The findings have been published in the Monthly Notices of the Royal Astronomical Society.

Source: RAS

Cosmologists Search for Gravity Waves to Prove Inflation Theory


During the next decade, cosmologists will attempt to observe the first moments of the Universe, hoping to prove a popular theory. They’ll be measuring primordial light, searching for the imprint of extremely weak gravity waves as convincing evidence for the Cosmic Inflation Theory, which proposes that a random, microscopic density fluctuation in the fabric of space and time gave birth to the Universe in a hot big bang approximately 13.7 billion years ago. A new instrument called a polarimeter is being attached to the South Pole Telescope (SPT), which operates at submillimeter wavelengths, between microwaves and the infrared on the electromagnetic spectrum. Einstein’s theory of general relativity predicts that Cosmic Inflation should produce the weak gravity waves.

Inflation Theory proposes a period of extremely rapid, exponential expansion of the Universe during its first few moments, prior to the more gradual Big Bang expansion. During that period, the energy density of the universe was dominated by a cosmological-constant-like vacuum energy that later decayed to produce the matter and radiation that fill the Universe today.

In 1979, physicist Alan Guth proposed the Cosmic Inflation Theory, which also predicts the existence of an infinite number of universes. Unfortunately, cosmologists have no way of testing that particular prediction.

The South Pole Telescope takes advantage of the clear, dry skies at the National Science Foundation’s South Pole Station to study the cosmic background radiation, the afterglow of the big bang. The SPT measures eight meters (26.4 feet) in diameter. Photo by Jeff McMahon

“Since these are separate universes, by definition that means we can never have any contact with them. Nothing that happens there has any impact on us,” said Scott Dodelson, a scientist at Fermi National Accelerator Laboratory and a Professor in Astronomy & Astrophysics at the University of Chicago.

But there is a way to probe the validity of cosmic inflation. The phenomenon would have produced two classes of perturbations. The first, fluctuations in the density of subatomic particles, occurs continuously throughout the universe, and scientists have already observed them.

“Usually they’re just taking place on the atomic scale. We never even notice them,” Dodelson said. But inflation would instantaneously stretch these perturbations into cosmic proportions. “That picture actually works. We can calculate what those perturbations should look like, and it turns out they are exactly right to produce the galaxies we see in the universe.”

The second class of perturbations would be gravity waves—Einsteinian distortions in space and time. Gravity waves also would get promoted to cosmic proportions, perhaps even strong enough for cosmologists to detect them with sensitive telescopes tuned to the proper frequency of electromagnetic radiation.

If the new polarimeter is sensitive enough, scientists should be able to detect the waves.

“If you detect gravity waves, it tells you a whole lot about inflation for our universe,” said John Carlstrom from the University of Chicago, who developed the new instrument. Carlstrom said detecting the waves would rule out various competing ideas for the origin of the universe. “There are fewer than there used to be, but they don’t predict that you have such an extreme, hot big bang, this quantum fluctuation, to start with,” he said. Nor would they produce gravity waves at detectable levels.

A simulation portrays the distortions in space and time at the subatomic scale, the result of quantum fluctuations occurring continuously throughout the universe. Near the end of the simulation, cosmic inflation begins to stretch space-time to the cosmic proportions of the universe.

Cosmologists also use the SPT in their quest to solve the mystery of dark energy. A repulsive force, dark energy pushes the universe apart and overwhelms gravity, the attractive force exerted by all matter. Dark energy is invisible, but astronomers are able to see its influence on clusters of galaxies that formed within the last few billion years.

NASA’s Wilkinson Microwave Anisotropy Probe collected data that produced this chart of sound waves from the universe. Called a power spectrum, the chart plots the cosmic microwave background radiation as ripples of different sizes across the sky. The data are consistent with predictions of cosmic inflation theory. Courtesy of the WMAP Science Team

The SPT detects the cosmic microwave background (CMB) radiation, the afterglow of the big bang. Cosmologists have mined a fortune of data from the CMB, which represents the forceful drums and horns of the cosmic symphony. But now the scientific community has its ears cocked for the tones of a subtler instrument — gravitational waves — that underlie the CMB.

“We have these key components to our picture of the universe, but we really don’t know what physics produces any of them,” said Dodelson of inflation, dark energy and the equally mysterious dark matter. “The goal of the next decade is to identify the physics.”

Source: University of Chicago

Next-Generation Telescope Gets Team

 


Astronomy organizations in the United States, Australia and Korea have signed on to build the largest ground-based telescope in the world – unless another team gets there first. The Giant Magellan Telescope, or GMT, will have the resolving power of a single 24.5-meter (80-foot) primary mirror, which will make it three times more powerful than any of the Earth’s existing ground-based optical telescopes. Its domestic partners include the Carnegie Institution for Science, Harvard University, the Smithsonian Institution, Texas A&M University, the University of Arizona, and the University of Texas at Austin. Although the telescope has been in the works since 2003, the formal collaboration was announced Friday.

Charles Alcock, director of the Harvard-Smithsonian Center for Astrophysics, said the Giant Magellan Telescope is being designed to build on the legacy of a rash of smaller telescopes from the 1990s in California, Hawaii and Arizona. The existing telescopes have mirrors in the range of six to 10 meters (18 to 32 feet), and – while they’re making great headway in the nearby universe – they’re only able to make out the largest planets around other stars and the most luminous distant galaxies.

With a much larger primary mirror, the GMT will be able to detect much smaller and fainter objects in the sky, opening a window to the most distant, and therefore the oldest, stars and galaxies. Formed within the first billion years of the Big Bang, such objects reveal tantalizing insight into the universe’s infancy.

Earlier this year, a different consortium including the California Institute of Technology and the University of California, with Canadian and Japanese institutions, unveiled its own next-generation concept: the Thirty Meter Telescope. Whereas the GMT’s 24.5-meter primary mirror will come from a collection of eight smaller mirrors, the TMT will combine 492 segments to achieve the power of a single 30-meter (98-foot) mirror design.

In addition, the European Extremely Large Telescope is in the concept stage.

In terms of science, Alcock acknowledged that the two telescopes with US participation are headed toward redundancy. The main differences, he said, are in the engineering arena.

“They’ll probably both work,” he said. But Alcock thinks the GMT is most exciting from a technological point of view. Each of the GMT’s seven 8.4-meter primary segments will weigh 20 tons, and the telescope enclosure has a height of about 200 feet. The GMT partners aim to complete their detailed design within two years.

The TMT’s segmented concept builds on technology pioneered at the W.M. Keck Observatory in Hawaii, a past project of the Cal-Tech and University of California partnership.

Construction on the GMT is expected to begin in 2012 and be completed in 2019, at Las Campanas Observatory in the Andes Mountains of Chile. The total cost is projected to be $700 million, with $130 million raised so far.

Artist’s concept of the Thirty Meter Telescope Observatory. Credit: TMT

Construction on the TMT could begin as early as 2011 with an estimated completion date of 2018. The telescope could go to Hawaii or Chile, and final site selection will be announced this summer. The total cost is estimated to be as high as $1 billion, with $300 million raised at last count.

 

Alcock said the next generation of telescopes is crucial for forward progress in 21st Century astronomy.

“The goal is to start discovering and characterizing planets that might harbor life,” he said. “It’s very clear that we’re going to need the next generation of telescopes to do that.”

And far from being a competition, the real race is to contribute to science, said Charles Blue, a TMT spokesman.

“All next generation observatories would really like to be up and running as soon as possible to meet the scientific demand,” he said.

In the shorter term, long-distance space studies will get help from the James Webb Space Telescope, designed to replace the Hubble Space Telescope when it launches in 2013. And the Atacama Large Millimeter Array (ALMA), a large interferometer being completed in Chile, could come online by 2012.

Sources: EurekAlert and interviews with Charles Alcock, Charles Blue

Profiling Potential Supernovae


Just as psychologists and detectives try to “profile” serial killers and other criminals, astronomers are trying to determine what type of star system will explode as a supernova. While criminals can sometimes be caught or rehabilitated before they commit the crime, supernovae, well, there’s no stopping them. But there is the potential to learn a great deal in both astronomy and cosmology by theorizing about potential stellar explosions. At the American Astronomical Society meeting last week, Professor Bradley E. Schaefer of Louisiana State University, Baton Rouge, discussed how searching through old astronomical archives can produce unique, front-line science about supernovae, as well as information about dark energy, in ways that no combination of modern telescopes can provide. Additionally, Schaefer said amateur astronomers can help in the search, too.

Schaefer has been studying archived data back to 1890. “Archival data is the only way to see the long-term behavior of stars, unless you want to keep watch nightly for the next century, and this is central to many front-line astronomy questions,” he said.

Bradley E. Schaefer of Louisiana State University, Baton Rouge

The main question Schaefer is trying to answer is what stars are the progenitors of Type Ia supernovae. Astronomers have been trying to track down this mystery for over 40 years.

Type Ia supernovae are remarkably bright but also remarkably uniform in their brightness, and therefore are regarded as the best astronomical “standard candles” for measurement across cosmological distances. Type Ia supernovae are also key to the search for dark energy. These blasts have been used as distance markers for measuring how fast the Universe is expanding.

However, a potential problem is that distant supernovae might be different from nearby events, thus confounding the measures. Schaefer said the only way to solve this problem is to identify the type of stars that explode as Type Ia supernovae so that corrections can be calculated. “The upcoming big-money supernova-cosmology programs require the answer to this problem for them to achieve their goal of precision cosmology,” said Schaefer.

Supernova 1994D in the outskirts of the galaxy NGC 4526.

Many types of star systems have been proposed as potential supernova progenitors, such as double white dwarf binaries, which were not discovered until 1988, and symbiotic stars, which are very rare. But the most promising progenitors are recurrent novae (RNe), usually binary systems with matter flowing off a companion star onto a white dwarf. The matter accumulates on the white dwarf’s surface until the pressure gets high enough to trigger a thermonuclear explosion (like an H-bomb). RNe can have multiple eruptions per century (as opposed to classical novae, which have only one observed eruption).

To determine whether RNe are supernova progenitors, Schaefer conducted extensive research to get RN orbital periods, accretion rates, outburst dates, eruption light curves, and the average magnitudes between outbursts.

Artist’s impression of a recurrent nova.

One big question was whether there were enough RNe to supply the observed rate of supernovae. Another was whether the nova eruption itself blows off more material than accumulates between eruptions, in which case the white dwarf would not gain mass.

In looking at the old sky photos, he was able to count all the discovered eruptions and measure the frequency of RN eruptions back to 1890. He could also measure the mass ejected during an eruption by timing eclipses on the archived photos and looking at the change in the orbital period across an eruption.
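
The period-change step can be sketched numerically. This is a rough, back-of-the-envelope illustration, not Schaefer's actual numbers: for fast, roughly isotropic mass loss from a binary, the orbit widens, and the fractional period change is about twice the ejected mass divided by the total mass of the system.

```python
# Illustrative sketch only (assumed values, not Schaefer's measurements):
# for fast, isotropic ("Jeans-mode") mass loss, the orbit widens and
#     dP / P  ~  2 * dm / M_total,
# so a measured period change across an eruption yields the ejected mass.

def ejected_mass(period_before_days, period_after_days, m_total_msun):
    """Ejected mass (solar masses) inferred from the orbital-period change."""
    dp_over_p = (period_after_days - period_before_days) / period_before_days
    return 0.5 * dp_over_p * m_total_msun

# Hypothetical recurrent nova: 1.23-day orbit, total mass 2.0 M_sun,
# period lengthening by 2 parts per million across the eruption:
dm = ejected_mass(1.23, 1.23 * (1 + 2e-6), 2.0)
print(f"ejected mass ~ {dm:.1e} M_sun")  # ejected mass ~ 2.0e-06 M_sun
```

A parts-per-million period change is tiny, which is why a century-long photographic baseline of eclipse timings is needed to detect it.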

In doing so, Schaefer was able to answer both questions: there were enough RNe to account for the observed Type Ia supernova rate. “With 10,000 recurrent novae in our Milky Way, their numbers are high enough to account for all of the Type Ia supernovae,” he said.

He also found that the white dwarf’s mass is increasing, so its collapse, and the resulting Type Ia supernova, should occur within a million years or so.
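
The million-year figure follows from simple bookkeeping. As a sketch with assumed, illustrative values for the white dwarf mass and net accretion rate: the time to reach the Chandrasekhar limit is the remaining mass deficit divided by the net rate at which the white dwarf gains mass.

```python
# Back-of-the-envelope timescale (all numbers are assumed illustrations):
# time to explosion ~ (M_Chandrasekhar - M_WD) / net accretion rate.

M_CH = 1.4  # Chandrasekhar mass, solar masses

def time_to_supernova_yr(m_wd_msun, net_accretion_msun_per_yr):
    """Years for a white dwarf gaining mass to reach the Chandrasekhar limit."""
    return (M_CH - m_wd_msun) / net_accretion_msun_per_yr

# A hypothetical 1.38 M_sun white dwarf gaining 2e-8 M_sun per year on net:
t = time_to_supernova_yr(1.38, 2e-8)
print(f"~ {t:.0e} years")  # ~ 1e+06 years
```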

Schaefer concluded that roughly one-third of all ‘classical novae’ are really RNe with two-or-more eruptions in the last century.

With this knowledge, astronomical theorists can now perform the calculations to make subtle corrections in using supernovae to measure the Universe’s expansion, which may help the search for dark energy.

An important result from this archival search is the prediction of an RN that could erupt at any time. An RN named U Scorpii (U Sco) is ready to “blow,” and a large worldwide collaboration (dubbed ‘USCO2009’) has already been formed to make concentrated observations (in X-ray, ultraviolet, optical, and infrared wavelengths) of the upcoming event. This is the first time a confident prediction has identified which star will go nova and in which year it will erupt.

During this search Schaefer also discovered one new RN (V2487 Oph), six new eruptions, five orbital periods, and two mysterious sudden drops in brightness during eruptions.

Another discovery is that the nova discovery efficiency is “horrifyingly low,” Schaefer said, typically only 4%. That is, only one out of every 25 novae is ever spotted. Schaefer said this is an obvious opportunity for amateur astronomers to use digital cameras to monitor the sky and discover the missing eruptions.

Photo archive at Harvard. Credit: Ashley Pagnotta

Schaefer used archives from around the world, the two primary ones being the Harvard College Observatory in Cambridge, Massachusetts, and the headquarters of the American Association of Variable Star Observers (AAVSO), also in Cambridge, Massachusetts. Harvard has a collection of half a million old sky photos covering the entire sky, with 1,000–3,000 pictures of each star going back to 1890. The AAVSO is the clearinghouse for countless brightness measurements made by many thousands of amateurs worldwide from 1911 to the present.

Source: Louisiana State University, AAS meeting press conference

More Evidence Earth is Not Center of Universe

[/caption]
If you’re certain the Universe revolves around you, I have some bad news. Researchers from the University of British Columbia say Earth’s location in the Universe is utterly unremarkable, despite recent theories that place Earth at the center of a giant void in space. A decade ago, it was discovered that the Universe’s expansion is accelerating. This acceleration was attributed to dark energy, the highly repulsive and mysterious stuff that has yet to be detected directly. But some scientists came up with an alternate theory in which Earth sits near the center of a giant void, or bubble, mostly empty of matter. New calculations solidify the case that dark energy permeates the cosmos.

While dark energy sometimes seems far-fetched, with its mysterious and so far undetectable properties, the alternate “void” theory of the ever-expanding Universe has a problem of its own: it violates the long-held Copernican Principle.

Polish astronomer Nicolaus Copernicus’s 1543 book, On the Revolutions of the Heavenly Spheres, moved Earth from being the center of the Universe to just another planet orbiting the Sun. Since then, astronomers have extended the idea and formed the Copernican Principle, which says that our place in the Universe as a whole is completely ordinary. Although the Copernican Principle has become a pillar of modern cosmology, finding conclusive evidence that our neighborhood of the Universe really isn’t special has proven difficult.

Nicolaus Copernicus
Nicolaus Copernicus

In 1998, studies of distant explosions called “type Ia supernovae” indicated that the expansion of the Universe is accelerating, an observation attributed to the repulsive force of a mysterious “dark energy.” But some cosmologists proposed that Earth was at the center of a void and that gravity would create the illusion of acceleration, mimicking the effect of dark energy on the supernova observations.

Now some advanced analysis and modeling performed by UBC post-doctoral fellows Jim Zibin and Adam Moss and Astronomy Prof. Douglas Scott is showing that this alternate “void theory” just doesn’t add up.

The researchers used data from the Wilkinson Microwave Anisotropy Probe satellite, which includes members from UBC on its international team, as well as data from various ground-based instruments and surveys.

“We tested void models against the latest data, including subtle features in the cosmic microwave background radiation – the afterglow of the Big Bang – and ripples in the large-scale distribution of matter,” says Zibin. “We found that void models do a very poor job of explaining the combination of these data.”

The team’s calculations instead solidify the conventional view that an enigmatic dark energy fills the cosmos and is responsible for the acceleration of the Universe. “Recent advances in data collection have brought us to the era of precision cosmology,” says Zibin. “Void models are terrible at explaining the new data, but the standard dark energy model works very well.

“Since we can only observe the Universe from Earth, it’s really hard to determine if we’re in a ‘special place,'” says Zibin. “But we’ve now learned that our location is much more ordinary than the strange dark energy that fills the Universe.”

The team’s research is published in Physical Review Letters.

Source: EurekAlert

No “Big Rip” in our Future: Chandra Provides Insights Into Dark Energy

[/caption]
When you throw a ball into the air, you expect gravity will eventually slow it and it will come back down again. But what if you threw a ball into the air and, instead of coming back down, it accelerated away from you? That is essentially what is happening with our universe: everything is accelerating away from everything else. This acceleration was discovered in 1998, and scientists believe the culprit is “dark energy,” a form of repulsive gravity that makes up about 72% of the universe. We don’t yet know what it is, but now, for the first time, astronomers have clearly seen its effects. Using the Chandra X-ray Observatory, scientists have tracked how dark energy has stifled the growth of galaxy clusters. Combining this new data with previous studies, scientists have obtained the best clues yet about what dark energy is, confirming its existence. And there’s good news, too: the expanding Universe won’t rip itself apart.

Previous dark energy research measured Type Ia supernovae. The new X-ray results provide a crucial, long-sought independent test of dark energy, one that depends on how gravity competes with accelerated expansion in the growth of cosmic structures.

“This result could be described as ‘arrested development of the universe’,” said Alexey Vikhlinin of the Smithsonian Astrophysical Observatory in Cambridge, Mass., who led the research. “Whatever is forcing the expansion of the universe to speed up is also forcing its development to slow down.”

Vikhlinin and his colleagues used Chandra to observe the hot gas in dozens of galaxy clusters, which are the largest collapsed objects in the universe. Some of these clusters are relatively close and others are more than halfway across the universe.

The results show that the increase in mass of the galaxy clusters over time aligns with a universe dominated by dark energy. It is more difficult for objects like galaxy clusters to grow when space is stretched, as it is by dark energy, and Vikhlinin and his team see this effect clearly in their data. The results are remarkably consistent with those from the distance measurements, showing that general relativity applies, as expected, on large scales.

Previously, it wasn’t known for sure whether dark energy is a constant across space, with a strength that never changes with distance or time, or whether it is a property of space itself, so that as space expands, dark energy grows stronger. In other words, it wasn’t known whether Einstein’s theory of general relativity with his cosmological constant is correct, or whether the theory has to be modified on large scales.

But the Chandra study strengthens the evidence that dark energy is the cosmological constant and is not growing in strength with time; if it were, the Universe would eventually rip itself apart.

“Putting all of this data together gives us the strongest evidence yet that dark energy is the cosmological constant, or in other words, that ‘nothing weighs something’,” said Vikhlinin. “A lot more testing is needed, but so far Einstein’s theory is looking as good as ever.”

These results have consequences for predicting the ultimate fate of the universe. If dark energy is explained by the cosmological constant, the expansion of the universe will continue to accelerate, and everything will eventually disappear from the sight of the Milky Way and its gravitationally bound neighbor galaxy, Andromeda. This won’t happen soon, but Vikhlinin said, “Double the age of the Universe from today, and you will see a strong effect. An astronomer would say this may be a good time to fund cosmological research, because further down the road there will be nothing to observe!”

Vikhlinin’s paper can be found here.

Source: Chandra Press Release, press conference

‘Laser Comb’ To Measure the Accelerating Universe

Back in April, UT published an article about using a device called a ‘laser comb’ to search for Earth-like planets. But astronomers also hope to use the device to probe dark energy in an ambitious project that would measure the velocities of distant galaxies and quasars over a 20-year period. This would let astronomers test Einstein’s theory of general relativity and the nature of the mysterious dark energy. The device uses femtosecond (one millionth of one billionth of a second) pulses of laser light coupled with an atomic clock to provide a precise standard for measuring wavelengths of light. Also known as an “astro-comb,” these devices should let astronomers use the Doppler-shift method to measure spectral lines of starlight with up to 60 times the precision of any current method. Astronomers have been testing the device and hope to use one in conjunction with the new Extremely Large Telescope being designed by ESO, the European Southern Observatory.

Astronomers use instruments called spectrographs to spread the light from celestial objects into its component colors, or frequencies, in the same way water droplets create a rainbow from sunlight. They can then measure the velocities of stars, galaxies and quasars, search for planets around other stars, or study the expansion of the Universe. A spectrograph must be accurately calibrated so that the frequencies of light can be correctly measured. This is similar to how we need accurate rulers to measure lengths correctly. In the present case, a laser provides a sort of ruler, for measuring colors rather than distances, with an extremely accurate and fine grid.
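
The scale of the calibration problem is easy to quantify. In a simple sketch (the 500 nm wavelength is an assumed, typical visible-light value), a radial velocity v shifts a spectral line at rest wavelength λ by Δλ = λ·v/c, so a centimetre-per-second velocity corresponds to a fantastically small wavelength shift:

```python
# Non-relativistic Doppler shift: dlambda = lambda * v / c.
# Values are illustrative; 500 nm is a typical visible wavelength.

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(rest_wavelength_nm, velocity_m_per_s):
    """Wavelength shift of a spectral line for a small radial velocity."""
    return rest_wavelength_nm * velocity_m_per_s / C

# A 1 cm/s velocity change at 500 nm:
print(f"{doppler_shift_nm(500.0, 0.01):.2e} nm")  # 1.67e-08 nm
```

That shift is about a hundred-millionth of a nanometre in wavelength, which is why the spectrograph's frequency "ruler" has to be so stable.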

New, extremely precise spectrographs will be needed in experiments planned for the future Extremely Large Telescope.

“We’ll need something beyond what current technology can offer, and that’s where the laser frequency comb comes in. It is worth recalling that the kind of precision required, 1 cm/s, corresponds, on the focal plane of a typical high-resolution spectrograph, to a shift of a few tenths of a nanometre, that is, the size of some molecules,” explains PhD student and team member Constanza Araujo-Hauck from ESO.

The new calibration technique comes from the combination of astronomy and quantum optics, in a collaboration between researchers at ESO and the Max Planck Institute for Quantum Optics. It uses ultra-short pulses of laser light to create a ‘frequency comb’ – light at many frequencies separated by a constant interval – to create just the kind of precise ‘ruler’ needed to calibrate a spectrograph.
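
A minimal numerical sketch of that ruler (the repetition rate and offset frequency below are assumed, illustrative values): each comb tooth sits at f_n = f_ceo + n·f_rep, so every optical frequency in the grid is known to the accuracy of the atomic clock that stabilizes the two radio frequencies.

```python
# Comb tooth frequencies: f_n = f_ceo + n * f_rep, with both radio
# frequencies locked to an atomic clock. Values here are illustrative.

def comb_tooth_hz(f_ceo_hz: float, f_rep_hz: float, n: int) -> float:
    """Optical frequency of the n-th comb tooth."""
    return f_ceo_hz + n * f_rep_hz

# A 250 MHz repetition rate and 20 MHz offset put tooth n = 2,400,000
# near 600 THz (green light, roughly 500 nm):
f = comb_tooth_hz(20e6, 250e6, 2_400_000)
print(f"{f:.0f} Hz")  # 600000020000000 Hz
```

Two easily measured radio frequencies thus pin down millions of evenly spaced optical reference lines across the spectrograph's range.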

The device has been tested on a solar telescope, and a new version of the system is now being built for the HARPS planet-finder instrument on ESO’s 3.6-metre telescope at La Silla in Chile, before being considered for future generations of instruments.

More information on laser combs.

Source: ESO