This event not only confirmed a century-old prediction of Einstein’s Theory of General Relativity and led to a revolution in astronomy – it also stoked the hopes of scientists who believed that black holes could account for the Universe’s “missing mass”. Unfortunately, a new study by a team of UC Berkeley physicists has shown that black holes are not the long-sought source of dark matter.
Since the early 20th century, scientists have grappled with explaining how and why the Universe is expanding – and, since the late 1990s, why that expansion appears to be accelerating. For decades, the most widely accepted explanation has been that the cosmos is permeated by a mysterious force known as “dark energy”. In addition to driving cosmic acceleration, this energy is thought to make up 68.3% of the universe’s total mass-energy content.
Much like dark matter, the existence of this invisible force is inferred from observable phenomena and from its fit with our current models of cosmology, not from direct evidence. Instead, scientists must rely on indirect observations, watching how fast cosmic objects (specifically Type Ia supernovae) recede from us as the universe expands.
This process would be extremely tedious for scientists – like those who work for the Dark Energy Survey (DES) – were it not for the new algorithms developed collaboratively by researchers at Lawrence Berkeley National Laboratory and UC Berkeley.
“Our algorithm can classify a detection of a supernova candidate in about 0.01 seconds, whereas an experienced human scanner can take several seconds,” said Danny Goldstein, a UC Berkeley graduate student who developed the code to automate the process of supernova discovery on DES images.
Currently in its second season, the DES takes nightly pictures of the Southern Sky with DECam – a 570-megapixel camera mounted on the Victor M. Blanco telescope at Cerro Tololo Inter-American Observatory (CTIO) in the Chilean Andes. Every night, the camera generates between 100 Gigabytes (GB) and 1 Terabyte (TB) of imaging data, which is sent to the National Center for Supercomputing Applications (NCSA) and DOE’s Fermilab in Illinois for initial processing and archiving.
Object recognition programs developed at the National Energy Research Scientific Computing Center (NERSC) and implemented at NCSA then comb through the images in search of possible detections of Type Ia supernovae. These powerful explosions occur in binary star systems where one star is a white dwarf, which accretes material from a companion star until it reaches a critical mass and explodes in a Type Ia supernova.
“These explosions are remarkable because they can be used as cosmic distance indicators to within 3-10 percent accuracy,” says Goldstein.
Distance is important because the further away an object is in space, the further back in time we are seeing it. By tracking Type Ia supernovae at different distances, researchers can measure cosmic expansion throughout the universe’s history. This allows them to put constraints on how fast the universe is expanding and may even provide other clues about the nature of dark energy.
“Scientifically, it’s a really exciting time because several groups around the world are trying to precisely measure Type Ia supernovae in order to constrain and understand the dark energy that is driving the accelerated expansion of the universe,” says Goldstein, who is also a student researcher in Berkeley Lab’s Computational Cosmology Center (C3).
The DES begins its search for Type Ia explosions by uncovering changes in the night sky, which is where the image subtraction pipeline developed and implemented by researchers in the DES supernova working group comes in. The pipeline subtracts images that contain known cosmic objects from new images that are exposed nightly at CTIO.
Each night, the pipeline produces between 10,000 and a few hundred thousand detections of supernova candidates that need to be validated.
“Historically, trained astronomers would sit at the computer for hours, look at these dots, and offer opinions about whether they had the characteristics of a supernova, or whether they were caused by spurious effects that masquerade as supernovae in the data. This process seems straightforward until you realize that the number of candidates that need to be classified each night is prohibitively large and only one in a few hundred is a real supernova of any type,” says Goldstein. “This process is extremely tedious and time-intensive. It also puts a lot of pressure on the supernova working group to process and scan data fast, which is hard work.”
To simplify the task of vetting candidates, Goldstein developed a code that uses the machine learning technique known as “Random Forest” to automatically vet detections of supernova candidates in real time, optimized for the DES. The technique employs an ensemble of decision trees that automatically ask the kinds of questions astronomers would typically consider when classifying supernova candidates.
At the end of the process, each detection of a candidate is given a score based on the fraction of decision trees that considered it to have the characteristics of a detection of a supernova. The closer the classification score is to one, the stronger the candidate. Goldstein notes that in preliminary tests, the classification pipeline achieved 96 percent overall accuracy.
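The vote-fraction scoring described above can be sketched with an off-the-shelf Random Forest. This is a minimal illustration, not the actual DES pipeline: the features, labels, and thresholds here are invented stand-ins for the image-based features the real classifier uses.

```python
# Minimal sketch (NOT the DES code) of Random Forest candidate scoring:
# each tree votes, and the score is effectively the fraction of trees
# that call the detection a real supernova.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features a human scanner might weigh, e.g. roundness of
# the detection, flux ratio to the template, proximity to a known artifact.
n_samples, n_features = 1000, 3
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy label: 1 = "real"

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# predict_proba averages the per-tree class probabilities; with fully
# grown trees this is effectively the fraction of trees voting "real".
scores = forest.predict_proba(X[:5])[:, 1]
print(scores)  # each score is in [0, 1]; closer to 1 = stronger candidate
```

In this toy setup a threshold on the score (say, 0.5) would separate strong candidates from likely artifacts, mirroring how the pipeline ranks detections for human review.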
“When you do subtraction alone you get far too many ‘false-positives’ — instrumental or software artifacts that show up as potential supernova candidates — for humans to sift through,” says Rollin Thomas, of Berkeley Lab’s C3, who was Goldstein’s collaborator.
He notes that with the classifier, researchers can quickly and accurately strain out the artifacts from supernova candidates. “This means that instead of having 20 scientists from the supernova working group continually sift through thousands of candidates every night, you can just appoint one person to look at maybe a few hundred strong candidates,” says Thomas. “This significantly speeds up our workflow and allows us to identify supernovae in real time, which is crucial for conducting follow-up observations.”
“Using about 60 cores on a supercomputer we can classify 200,000 detections in about 20 minutes, including time for database interaction and feature extraction,” says Goldstein.
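A quick back-of-the-envelope check shows these throughput figures are consistent with the roughly 0.01 seconds per candidate quoted earlier:

```python
# Sanity check on the quoted throughput: 200,000 detections in ~20 minutes
# on ~60 cores (figures taken from the article).
detections = 200_000
wall_seconds = 20 * 60
cores = 60

per_detection_wall = wall_seconds / detections          # wall-clock s each
per_detection_core = wall_seconds * cores / detections  # core-seconds each

print(f"{per_detection_wall:.4f} s wall-clock per detection")   # 0.0060 s
print(f"{per_detection_core:.2f} core-seconds per detection")   # 0.36 s
```

So even counting database and feature-extraction overhead, each detection costs well under a second of core time – versus several seconds for a human scanner.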
Goldstein and Thomas note that the next step in this work is to add a second level of machine learning to the pipeline to improve the classification accuracy. This extra layer would take into account how the object was classified in previous observations as it determines the probability that the candidate is “real.” The researchers and their colleagues are currently working on different approaches to achieve this capability.
According to Wikipedia, a journal club is a group of individuals who meet regularly to critically evaluate recent articles in the scientific literature. This being Universe Today, if we occasionally stray into critically evaluating each other’s critical evaluations, that’s OK too. And of course, the first rule of Journal Club is… don’t talk about Journal Club.
So, without further ado – today’s scheduled-for-demolition journal article is about the ongoing problem of figuring out what events precede a Type 1a supernova.
There is growing interest in the nature of the events that precede Type 1a supernovae. We are confident that the progenitor stars of Type 1a supernovae are white dwarfs – but these stars generally have very long lives, making it difficult to identify which ones are potentially on the brink of exploding.
We are also confident that something happens to cause a white dwarf to accumulate extra mass until it reaches its Chandrasekhar limit (around 1.4 solar masses, depending on the star’s spin).
For a long time, it had been assumed that a Type 1a supernova probably arose from a binary system containing a white dwarf and a star that had just evolved into a red giant, its outer layers swelling out into the gravitational influence of the white dwarf. This new material was accreted onto the white dwarf until it hit its Chandrasekhar limit – and then kabloowie.
However, the white-dwarf-red-giant-binary hypothesis is currently falling out of favour. It has always had the problem that any Type 1 supernova has, by definition, almost no hydrogen absorption lines in its light spectrum – which makes sense for a Type 1a supernova arising from a hydrogen-depleted white dwarf – but then what happened to the new material supposedly donated by a red giant partner (which should have been mostly hydrogen)?
Also, the recently discovered Type 1a supernova SN 2011fe was observed just as its explosion was commencing, allowing constraints to be placed on the nature of its progenitor system. Apparently there is no way the system could have included something as big as a red giant, so the next most likely cause is the merging (or collision) of two white dwarfs.
Other modelling research has also concluded that the two white dwarf merger scenario may be statistically more likely than the red giant accretion scenario – since the latter requires a lot of Goldilocks parameters (everything has to be just right for a Type 1a to eventuate).
This latest paper expands the possible scenarios under which a two white dwarf merger could produce a Type 1a supernova – and finds a surprising number of viable variations with respect to the mass, chemistry and orbital proximity of each star. Of course, it is just modelling, but it does challenge the current assertion in the relevant Wikipedia entry that white dwarf mergers are a second possible, but much less likely, mechanism for Type 1a supernova formation.
So – comments? Anyone want to defend the old red-giant-white-dwarf scenario? Does computer modelling count as a form of evidence? Want to suggest an article for the next edition of Journal Club?
Given the importance of Type 1a supernovae as the standard candles which demonstrate that the universe’s expansion is accelerating, we require a high degree of confidence that those candles really are standard.
A paper released on arXiv – with a list of authors reading like a Who’s Who of cosmology, including all three winners of this year’s Nobel Prize in Physics – details an ultraviolet (UV) analysis of four Type 1a supernovae, three of which represent significant outliers from the standard light curve expected of Type 1a supernovae.
Some diversity in UV output has already been established from observing distant high red-shift Type 1a supernovae, since their UV output is shifted into optical light and can hence be observed through the atmosphere. However, to gain detailed observations in UV, you need to look at closer, less red-shifted Type 1a supernovae and hence you need space telescopes. These researchers used data collected by the ACS (Advanced Camera for Surveys) on the Hubble Space Telescope.
The supernovae studied were SN 2004dt, SN 2004ef, SN 2005M and SN 2005cf. SN 2005cf is considered a ‘gold standard’ Type 1a supernova – while the other three show considerable divergence from the standard UV light curve, even though their optical light output looks standard.
The researchers also looked at a slightly larger dataset of UV supernova observations made by the Swift spacecraft – which also showed a similar diversity in UV light that was not apparent in optical light.
This is a bit of a worry, since the supernova dataset from which we conclude that the universe’s expansion is accelerating is largely based on observations in optical light, which, unlike UV, can make it through the atmosphere and be collected by ground-based telescopes.
Nonetheless, if you are thinking that three outliers isn’t a lot – you’d be right. The paper’s aim is to indicate that there are minor discrepancies in the current dataset upon which we have built our current model of the universe. The academic muscle focused on this seemingly minor issue is some indication of the importance of isolating and characterising the nature of any such discrepancies, so that we can continue to have confidence in the Type 1a supernova standard candle dataset – or not.
The researchers acknowledge that the UV excess – not seen at all in SN 2005cf, but seen to varying degrees in the other three Type 1a supernovae, most pronounced in SN 2004dt – is a problem, even if it is not a huge problem.
As standard candles, Type 1a supernovae (or SNe1a) are key to determining the distance of their host galaxies. But one key consideration in determining their absolute luminosity is the reddening caused by the dust in the host galaxy. A higher than expected UV flux in some SNe1a could lead to an underestimate of this normal reddening effect, which dims the visible light of the star irrespective of its distance. Such an atypical SNe1a would then be picked up in ground-based SNe1a sky surveys as misleadingly dim – and their host galaxies would be determined as being further away from us than they really are.
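The bias described above can be made concrete with the standard distance-modulus relation. The numbers below are illustrative assumptions, not values from the paper: a typical SN Ia peak absolute magnitude of about −19.3 and a modest extinction of 0.4 magnitudes.

```python
# Illustrative only (values assumed, not from the paper): how missing the
# dust-reddening correction inflates a standard candle's inferred distance.
# Distance modulus: mu = m - M; distance in parsecs: d = 10**(mu/5 + 1).
M_ABS = -19.3  # assumed typical SN Ia peak absolute magnitude

def distance_pc(m_obs, A_V):
    """Distance in parsecs after correcting apparent mag for extinction A_V."""
    mu = (m_obs - A_V) - M_ABS
    return 10 ** (mu / 5 + 1)

m_obs = 18.0                            # assumed observed apparent magnitude
d_true = distance_pc(m_obs, A_V=0.4)    # full extinction correction applied
d_biased = distance_pc(m_obs, A_V=0.0)  # extinction correction missed

print(f"{d_biased / d_true:.3f}x too far")  # 1.202x too far
```

Skipping just 0.4 magnitudes of extinction correction puts the supernova about 20 percent further away than it really is – exactly the kind of systematic error that would distort a cosmological fit.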
The researchers call this another possible systematic error within the current SNe1a-based calculations of the nature of the universe – those other possible systematic errors including the metallicity of the supernovae themselves, as well as the size, density and chemistry of their host galaxy.
The key question to take forward now is what proportion of the total population of SNe1a in the universe might have this high UV flux. To answer that we will need to get more space telescope data.