Variability in Type Ia Supernovae Has Implications for Studying Dark Energy

by Nancy Atkinson on August 12, 2009


SN 1994D, a type Ia supernova in the galaxy NGC 4526

The discovery of dark energy, a mysterious force that is accelerating the expansion of the universe, was based on observations of type Ia supernovae, and these stellar explosions have long been used as “standard candles” for measuring the expansion. But not all type Ia supernovae are created equal. A new study reveals sources of variability in these supernovae, and to accurately probe the nature of dark energy and determine whether it is constant or varies over time, scientists will have to find a way to measure cosmic distances with much greater precision than they have in the past.

“As we begin the next generation of cosmology experiments, we will want to use type Ia supernovae as very sensitive measures of distance,” said Daniel Kasen, lead author of a study published in Nature this week. “We know they are not all the same brightness, and we have ways of correcting for that, but we need to know if there are systematic differences that would bias the distance measurements. So this study explored what causes those differences in brightness.”

Kasen and his coauthors, Fritz Röpke of the Max Planck Institute for Astrophysics in Garching, Germany, and Stan Woosley, professor of astronomy and astrophysics at UC Santa Cruz, used supercomputers to run dozens of simulations of type Ia supernovae. The results indicate that much of the diversity observed in these supernovae is due to the chaotic nature of the processes involved and the resulting asymmetry of the explosions.

For the most part, this variability would not produce systematic errors in measurement studies as long as researchers use large numbers of observations and apply the standard corrections, Kasen said. The study did find a small but potentially worrisome effect that could result from systematic differences in the chemical compositions of stars at different times in the history of the universe. But researchers can use the computer models to further characterize this effect and develop corrections for it.

A type Ia supernova occurs when a white dwarf star acquires additional mass by siphoning matter away from a companion star. When it reaches a critical mass (1.4 times the mass of the Sun, packed into an object the size of the Earth), the heat and pressure in the center of the star spark a runaway nuclear fusion reaction, and the white dwarf explodes. Since the initial conditions are about the same in all cases, these supernovae tend to have the same luminosity, and their “light curves” (how the luminosity changes over time) are predictable.

Some are intrinsically brighter than others, but the brighter ones flare and fade more slowly, and this correlation between brightness and the width of the light curve allows astronomers to apply a correction that standardizes their observations. So astronomers can measure the light curve of a type Ia supernova, calculate its intrinsic brightness, and then determine how far away it is, since the apparent brightness diminishes with distance (just as a candle appears dimmer at a distance than it does up close).
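As a rough sketch of how that standardization works in practice, here is a minimal Python example (not the pipeline used in the study; the function name, the fiducial absolute magnitude of -19.3, and the correction slope are assumed values chosen for illustration). It corrects the peak magnitude for light-curve width and converts the result to a distance via the distance modulus.

```python
def distance_from_sn(apparent_peak_mag, stretch,
                     fiducial_abs_mag=-19.3, alpha=1.5):
    """Estimate the distance to a type Ia supernova (illustrative only).

    apparent_peak_mag : observed peak magnitude
    stretch           : light-curve width relative to a fiducial template
                        (broader, slower light curves have stretch > 1)
    fiducial_abs_mag  : assumed absolute magnitude of a stretch = 1 event
    alpha             : assumed slope of the width-luminosity correction
    """
    # Broader light curves are intrinsically brighter, so brighten the
    # assumed absolute magnitude accordingly (a simple linear rule here).
    corrected_abs_mag = fiducial_abs_mag - alpha * (stretch - 1.0)

    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    mu = apparent_peak_mag - corrected_abs_mag
    distance_pc = 10.0 ** (mu / 5.0 + 1.0)
    return distance_pc / 1.0e6  # in megaparsecs

# A supernova peaking at magnitude 24 with a slightly broad light curve:
print(f"{distance_from_sn(24.0, 1.1):.0f} Mpc")
```

Real analyses also correct for color, dust, and host-galaxy properties; the point here is only the width-luminosity standardization described above.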

The computer models used to simulate these supernovae in the new study are based on current theoretical understanding of how and where the ignition process begins inside the white dwarf and where it makes the transition from slow-burning combustion to explosive detonation.

The simulations showed that the asymmetry of the explosions is a key factor determining the brightness of type Ia supernovae. “The reason these supernovae are not all the same brightness is closely tied to this breaking of spherical symmetry,” Kasen said.

The dominant source of variability is the synthesis of new elements during the explosions, which is sensitive to differences in the geometry of the first sparks that ignite a thermonuclear runaway in the simmering core of the white dwarf. Nickel-56 is especially important, because the radioactive decay of this unstable isotope creates the afterglow that astronomers are able to observe for months or even years after the explosion.

“The decay of nickel-56 is what powers the light curve. The explosion is over in a matter of seconds, so what we see is the result of how the nickel heats the debris and how the debris radiates light,” Kasen said.
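For a feel for the timescales involved, here is a minimal sketch (illustrative Python of my own, not Kasen's radiative-transfer code) of the standard two-step decay chain, nickel-56 to cobalt-56 to iron-56, whose released energy heats the debris; the half-lives of roughly 6 and 77 days set how quickly the afterglow fades.

```python
import math

# Half-lives of the decay chain Ni-56 -> Co-56 -> Fe-56, in days
T_NI, T_CO = 6.1, 77.2
LAM_NI, LAM_CO = math.log(2) / T_NI, math.log(2) / T_CO

def nickel_cobalt_fractions(t_days, ni0=1.0):
    """Fractions of the original Ni-56 present as Ni-56 and Co-56 at time t.

    Standard two-step (Bateman) decay solution; the energy these decays
    deposit in the ejecta is what powers the observed light curve.
    """
    ni = ni0 * math.exp(-LAM_NI * t_days)
    co = ni0 * LAM_NI / (LAM_CO - LAM_NI) * (
        math.exp(-LAM_NI * t_days) - math.exp(-LAM_CO * t_days))
    return ni, co

for t in (0, 10, 30, 100):
    ni, co = nickel_cobalt_fractions(t)
    print(f"day {t:3d}: Ni-56 = {ni:.3f}, Co-56 = {co:.3f}")
```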

Kasen developed the computer code to simulate this radiative transfer process, using output from the simulated explosions to produce visualizations that can be compared directly to astronomical observations of supernovae.

The good news is that the variability seen in the computer models agrees with observations of type Ia supernovae. “Most importantly, the width and peak luminosity of the light curve are correlated in a way that agrees with what observers have found. So the models are consistent with the observations on which the discovery of dark energy was based,” Woosley said.

Another source of variability is that these asymmetric explosions look different when viewed at different angles. This can account for differences in brightness of as much as 20 percent, Kasen said, but the effect is random and creates scatter in the measurements that can be statistically reduced by observing large numbers of supernovae.
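A small Monte Carlo sketch (purely illustrative numbers, not from the study) shows why this random scatter is benign: the error on the averaged brightness of N supernovae falls roughly as one over the square root of N.

```python
import random
import statistics

def typical_mean_error(n_supernovae, scatter=0.20, trials=2000):
    """Typical fractional error of the averaged brightness when every
    supernova carries an independent random offset (e.g. viewing angle)."""
    errors = []
    for _ in range(trials):
        sample = [1.0 + random.gauss(0.0, scatter) for _ in range(n_supernovae)]
        errors.append(abs(statistics.mean(sample) - 1.0))
    return statistics.mean(errors)

for n in (1, 10, 100, 1000):
    print(f"N = {n:4d}: typical error ~ {typical_mean_error(n):.3f}")
```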

The potential for systematic bias comes primarily from variation in the initial chemical composition of the white dwarf star. Heavier elements are synthesized during supernova explosions, and debris from those explosions is incorporated into new stars. As a result, stars formed recently are likely to contain more heavy elements (higher “metallicity,” in astronomers’ terminology) than stars formed in the distant past.

“That’s the kind of thing we expect to evolve over time, so if you look at distant stars corresponding to much earlier times in the history of the universe, they would tend to have lower metallicity,” Kasen said. “When we calculated the effect of this in our models, we found that the resulting errors in distance measurements would be on the order of 2 percent or less.”
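The conversion from a brightness bias to a distance bias follows from the inverse-square law: the inferred distance scales as the square root of the assumed luminosity, so a systematic luminosity offset shows up at roughly half its size in distance. A toy calculation (the 4 percent input is an assumed number for illustration, not a figure from the study):

```python
def distance_bias(luminosity_bias):
    """Fractional distance error caused by a fractional luminosity bias.

    The inferred distance goes as sqrt(L / flux), so
    d_err / d = sqrt(1 + dL/L) - 1, roughly half the luminosity bias.
    """
    return (1.0 + luminosity_bias) ** 0.5 - 1.0

# An assumed ~4 percent systematic brightness offset gives ~2 percent in distance
print(f"{distance_bias(0.04):.3f}")
```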

Further studies using computer simulations will enable researchers to characterize the effects of such variations in more detail and limit their impact on future dark-energy experiments, which might require a level of precision that would make errors of 2 percent unacceptable.

Source: EurekAlert

About 

Nancy Atkinson is Universe Today's Senior Editor. She also is the host of the NASA Lunar Science Institute podcast and works with Astronomy Cast. Nancy is also a NASA/JPL Solar System Ambassador.

DrFlimmer August 14, 2009 at 3:08 PM

Nereid, you want us to turn to philosophy, don’t you?
Since it’s late and there is a beautiful clear sky out there, only a short note:
I wonder what an experimentalist would say about the claim that everything in physics is theoretical ;) .
However, you are right, of course. Physicists always try to put everything into a nice, beautiful mathematical formula. So physics is mathematical “by definition”, and it is right to be so, because mathematics is the one thing we can trust to always hold, not just across time but also across space. Mathematics is true everywhere.
If there are aliens out there and they are clever enough to think about mathematics, they will find the same rules.
That is useful in physics. We have a ground upon which we can build our theories. We assume that all our physical rules hold in the entire universe. At least the ground does – a good start, I’d say.

Manu August 14, 2009 at 5:10 PM

“The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.”

Eugene Wigner, of course, in his famous article “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” (see Wikip).

Nereid: interesting example about gravitation. It goes further back: Descartes developed his idea of space-filling ‘vortices’ (is it the right English word? ‘tourbillons’ in French) because he abhorred the idea of action at a distance.
Newton and Maxwell sent the swing back toward action at a distance; Einstein and quantum physics bring it back to Descartes’ side.
We haven’t seen the end of the swinging, I guess.

Nereid August 14, 2009 at 6:04 PM

I guess there’s a philosophical aspect, and I certainly enjoy a good science/philosophy discussion!

However, I also want to draw attention to the fact (hah!) that so much of what we treat as fact (hah squared!!) is, in fact (well, you get the idea), merely a really brief, math-based description that has astonishing explanatory and predictive powers (thanks Manu, Wigner is *exactly* one of the sources I had in mind) …

If one is interested only (or mostly) in ‘observations’ (or, more generally, observables), then who cares if gravity is an instantaneous action at a distance, geometry, or mischievous invisible pink fairies (provided it’s 100% reliable, objective, repeatable, independently verifiable, etc, etc, etc)?

In this respect, what’s the difference between CDM and quarks (say)?

DrFlimmer August 15, 2009 at 5:45 AM

"I guess there’s a philosophical aspect, and I certainly enjoy a good science/philosophy discussion!"

Oh, yes! As a physicist you have to be a little philosopher, especially if you enjoy the extremes of physics (cosmology and QM). And sometimes it is worth the effort to start such philosophical discussions about physics. I think, in some ways, it brings us back down to earth. ;)

In some cases you are right, Nereid, that our facts are, in fact (I like the way you said it ;) ), mathematical descriptions that seem to give correct interpretations of what’s going on. CDM is such a case.
On the other hand, there are cases where the facts have gone beyond a “pure mathematical” description. Quarks, to use your example, are real objects. At least there are objects in our universe that behave as what we mathematically describe as quarks. And these objects have revealed themselves: we can see and detect them, we can almost put our hands on them. So these things are, in fact, real!
And I think this is different from what we can say about CDM.

Concerning “observation”.
That is the point: physicists not only want to observe, they want to predict.
I can observe that the Moon goes around the Earth in about 28 days. This meets your criteria, and I need not care why it does so; it could be due to pink unicorns or geometry, it doesn’t matter. That would be “pure observation”.
Of course, I can also make the prediction that the Moon will be at (almost) the same position again in 28 days.
But this is not what we want, because our knowledge that the Moon will be back again in 28 days does not apply to Mars (e.g.). We will see that Mars reappears in so-and-so-many days (sorry, I don’t know and don’t want to look it up). So, probably the pink unicorns are at work here, too, and let Mars reappear after a longer time period.
That would be “observation only”, somewhat like Anaconda wishes it ;) .

To make predictions we not only need to know when something will reappear in the skies, but also why. We could assume, like Newton did (or Kepler before him), that both observations could be due to the same cause. And what we find is a beautiful law that only depends on the distance, and we can apply it to other objects in the solar system and see, well, probably there is something that governs the motion of the planets. Let’s call it gravity (we could name it “pink unicorns”, but why?).

Probably there is some sense in these many words of mine.
In short, there is a difference between pure observation, prediction, pure mathematics and (well) observation.
Pure things won’t do anything. Pure observation without the power of forecast is as useless as pure mathematical work without any observation that shows whether the work has anything to do with (what we call) reality.
There is a difference between CDM and quarks, as I might have shown.
As a little prediction ( ;) ): the future will tell whether this difference can be overcome…

Manu August 15, 2009 at 6:04 AM

DrFlimmer: interesting example, that.
“Quarks, to use your example, are real objects. At least there are objects in our universe that behave as what we mathematically describe as quarks. And these objects have revealed themselves, we can see and detect them, we can almost put our hand on them.”
Well I’m not so sure…
You’re aware, of course, that quarks were in the first place a mathematical concept that was helpful in building the Standard Model, and that their inventors (Gell-Mann and Zweig) were quite surprised to actually ‘see’ them take on some reality.
Still, how ‘real’ is a particle that is, precisely, impossible to detect *alone*? That in fact can’t *exist* alone?
We can only see them in twos or threes, and have to deduce their existence indirectly from observations of these clusters.
The same goes in fact for much of modern physics.
I think the more we delve down into the elementary, the more blurred familiar concepts such as *reality* become, and the more *real* abstract math concepts become: they might well be one and the same, at the bottom (provided there is a bottom).

I for one dream of an ultimate physics where the elementary object, whatever it is, and the law that governs it, are one and the same thing!
Does this even *mean* anything? ;-)

DrFlimmer August 15, 2009 at 8:43 AM

You have earned a point, Manu.

Still, they are there. As are “virtual particles”. The proton has a mass of about 938 MeV (assuming c=1, natural units – damn, I hate them!). The three valence quarks add up to a mass of about 15 MeV, IIRC. The rest is in relativistic effects, gluons, and virtual particles that just pop into and out of existence.
That those quantum fluctuations are real is shown by the Casimir effect: two neutral plates placed close together in a vacuum feel an “attraction” towards each other, because more virtual particles are possible outside than between the plates. This is crazy and weird, but it is experimentally verified.
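A quick back-of-the-envelope using the textbook ideal-plate formula, P = pi^2 * hbar * c / (240 * d^4), with plate separations chosen only for illustration:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(separation_m):
    """Attractive pressure (Pa) between two ideal neutral plates in vacuum,
    from the textbook formula P = pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * HBAR * C / (240.0 * separation_m ** 4)

# Tiny at a micrometre, enormous at tens of nanometres:
for d in (1e-6, 1e-7, 1e-8):
    print(f"d = {d:.0e} m  ->  P ~ {casimir_pressure(d):.3g} Pa")
```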
So what is really real and what is possibly real? And when do we say we “found” quarks and CDM and when do we say these “theoretical” objects can possibly explain what’s happening in this strange universe?
Now, you (and others) again ;)

Btw: your dream is the same one everyone dreams of: a grand unified theory, or GUT for short.
It is my opinion that the solution should be quite simple – maybe we don’t have the (mathematical) tools and the skills to find it, but maybe we will… and maybe we won’t…
