When it comes to cosmic eye candy, planetary nebulae are at the top of the candy bowl. Like fingerprints, or maybe fireworks displays, each one is different. What factors are at work to make them so unique from one another?
When low- to medium-mass stars exhaust their supply of hydrogen, they exit the main sequence and expand to become red giants, entering what is known as the Asymptotic Giant Branch (AGB) phase. Stars in this phase of their evolution become variable (experiencing changes in brightness) as they shed their outer layers, spreading dust throughout the interstellar medium (ISM) that is crucial to the development of planetary nebulae and protoplanetary systems. For decades, astronomers have sought to better understand the role red giant stars play.
Studying interstellar and protoplanetary dust is difficult because it is so faint in visible light. Luckily, this dust absorbs starlight and radiates brightly in the infrared (IR), making it visible to IR telescopes. Using archival data from the now-retired Akari and Wide-field Infrared Survey Explorer (WISE) missions, a team of Japanese astronomers conducted the first long-period survey of dusty AGB stars and observed that the variable intensity of these stars coincides with the amount of dust they produce. Since this dust plays an important role in the formation of planets, the study could shed light on the origins of life.
When stars like our Sun exhaust their hydrogen fuel, they enter what is known as their Red Giant Branch (RGB) phase. This is characterized by the star expanding to several times its original size, after which it sheds its outer layers and becomes a compact white dwarf. Over the next few billion years, it is believed that these stars will slowly consume any objects and dust rings still close enough to be influenced by their gravity.
However, a citizen scientist named Melina Thévenot recently made a surprising discovery when observing a white dwarf system. Based on data from the Wide-field Infrared Survey Explorer (WISE) mission, this star has been a white dwarf for billions of years, but still has multiple rings of dust around it. Known as LSPM J0207+3331 (or J0207), this discovery could force researchers to reconsider models of planetary systems.
Quick, what’s the reddest star visible to the naked eye?
Depending on your sky conditions, your answer may well be this week’s astronomical highlight.
Mu Cephei, also known as Herschel’s Garnet Star, is a ruddy gem in the constellation Cepheus near the Cygnus/Lacerta border. A variable star ranging about three-fold in brightness, from magnitude 5.0 to 3.7, Mu Cephei is low to the northeast for mid-northern latitude observers in July at dusk, and will be progressively higher as summer wears on.
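That “three-fold” figure follows directly from the logarithmic magnitude scale. Here is a minimal Python sketch of the conversion (the function name is ours, not from any astronomy library):

```python
# Brightness ratio corresponding to a change in apparent magnitude.
# The magnitude scale is logarithmic: a difference of 5 magnitudes is
# defined as a factor of exactly 100 in brightness, so 1 magnitude is
# a factor of 100**(1/5) ~ 2.512.

def brightness_ratio(mag_faint, mag_bright):
    """Factor by which the brighter magnitude outshines the fainter one."""
    return 10 ** (0.4 * (mag_faint - mag_bright))

# Mu Cephei's quoted range: magnitude 5.0 (minimum) to 3.7 (maximum)
ratio = brightness_ratio(5.0, 3.7)
print(f"{ratio:.1f}")  # ~3.3, consistent with the "about three-fold" figure
```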
When discovered on August 24, 2011, supernova 2011fe was the closest supernova since the famous SN 1987A. Located in the relatively nearby Pinwheel Galaxy (M101), it was a prime target for scientists to study, since the host galaxy is well studied and many high-resolution images exist from before the explosion, allowing astronomers to search them for information on the star that led to the eruption. But when a team led by Weidong Li at the University of California, Berkeley, searched, what they found defied the typically accepted explanations for supernovae of 2011fe’s type.
SN 2011fe was a type Ia supernova. This class of supernova is expected to be caused by a white dwarf that accumulates mass contributed by a companion star. The general expectation is that the companion is a star evolving off the main sequence: as it does, it swells up, and matter spills onto the white dwarf. If this pushes the dwarf’s mass over the limit of 1.4 times the mass of the Sun, the star can no longer support its own weight, triggering a runaway burst of fusion that destroys the star in a supernova.
Fortunately, the swollen up stars, known as red giants, become exceptionally bright due to their large surface area. The eighth brightest star in our own sky, Betelgeuse, is one of these red giants. This high brightness means that these objects are visible from large distances, potentially even in galaxies as distant as the Pinwheel. If so, the astronomers from Berkeley would be able to search archival images and detect the brighter red giant to study the system prior to the explosion.
But when the team searched the images from the Hubble Space Telescope which had snapped pictures through eight different filters, no star was visible at the location of the supernova. This finding follows a quick report from September which announced the same results, but with a much lower threshold for detection. The team followed up by searching images from the Spitzer infrared telescope which also failed to find any source at the proper location.
While this doesn’t rule out the presence of the contributing star, it does place constraints on its properties. The limit on brightness means that the contributor could not have been a luminous red giant. Instead, the result favors another model of mass donation known as the double-degenerate model.
In this scenario, two white dwarfs (both supported by degenerate electrons) orbit one another in a tight orbit. Due to relativistic effects, the system will slowly lose energy and eventually the two stars will become close enough that one will become disrupted enough to spill mass onto the other. If this mass transfer pushes the primary over the 1.4 solar mass limit, it would trigger the same sort of explosion.
This double-degenerate model does not entirely rule out the possibility of red giants contributing to type Ia supernovae, but other recent evidence has revealed missing red giants in other cases as well.
While planets orbiting twin stars are a staple of science fiction, so is having humans live on planets orbiting red giant stars. The majority of the story of Planet of the Apes takes place on a planet around Betelgeuse. Planets around Arcturus in Isaac Asimov’s Foundation series make up the capital of his Sirius Sector. Superman’s home planet was said to orbit the fictional red giant Rao. Races on these planets are often depicted as old and wise, since their stars are aged and nearing the end of their lives. But is it really plausible to have such planets?
Stars don’t last forever. Our own Sun has an expiration date in about 5 billion years, when the hydrogen fuel in its core will have run out. Currently, the fusion of that hydrogen into helium produces the pressure that keeps the star from collapsing in on itself under gravity. When the fuel runs out, that support mechanism will be gone and the Sun will start to shrink. The shrinking heats the star, raising the temperature until a shell of hydrogen around the now-exhausted core becomes hot enough to take up the job of the core and begins fusing hydrogen into helium. This new energy source pushes the outer layers of the star back out, causing it to swell to thousands of times its previous size. The star will then give off 1,000 to 10,000 times as much light overall, but since this energy is spread over such an enormous surface area, the surface cools and the star appears red, hence the name.
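The reason a star can brighten a thousand-fold and still glow red falls out of the Stefan-Boltzmann law: luminosity scales with surface area times the fourth power of temperature, so T ∝ (L/R²)^(1/4). A rough sketch, with illustrative (assumed) round numbers for the luminosity and radius increase:

```python
# Why a red giant looks red despite being far more luminous. From the
# Stefan-Boltzmann law, L = 4*pi*R**2 * sigma * T**4, the effective
# temperature scales as T ∝ (L / R**2) ** 0.25. The 1,000x luminosity
# and 100x radius below are illustrative assumptions, not measurements.

T_SUN = 5772.0  # effective temperature of the Sun, in kelvin

def effective_temperature(lum_ratio, radius_ratio):
    """Effective temperature (K) for given luminosity and radius ratios to the Sun."""
    return T_SUN * (lum_ratio / radius_ratio**2) ** 0.25

T_giant = effective_temperature(1000.0, 100.0)
print(f"{T_giant:.0f} K")  # ~3200 K: cool enough to glow red
```

Even a ten-fold drop in flux per unit area, as here, is enough to move the surface from yellow-white sunlight into the red.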
So this is a red giant: A dying star that is swollen up and very bright.
Now to take a look at the other half of the equation, namely, what determines the habitability of a planet? Since these sci-fi stories inevitably have humans walking around on the surface, there’s some pretty strict criteria this will have to follow.
First off, the temperature must be not too hot and not too cold. In other words, the planet must be in the habitable zone, also known as the “Goldilocks zone”. This is generally a pretty good-sized swath of celestial real estate. In our own solar system, it extends from roughly the orbit of Venus to the orbit of Mars. But what makes Mars and Venus inhospitable and Earth relatively cozy is the atmosphere: unlike Mars’s, ours is thick enough to keep much of the heat we receive from the Sun, but, unlike Venus’s, not too much of it.
The atmosphere is crucial in other ways too. Obviously, it’s what the intrepid explorers are going to be breathing. If there’s too much CO2, it will not only trap too much heat but also make the air hard to breathe. And since CO2 doesn’t block UV light from the Sun, cancer rates would go up. So we need an oxygen-rich atmosphere, but not one so oxygen-rich that there aren’t enough greenhouse gases left to keep the planet warm.
The problem here is that oxygen-rich atmospheres just don’t exist without some assistance. Oxygen is actually very reactive. It likes to form bonds, making it unavailable to be free in the atmosphere like we want. It forms things like H2O, CO2, oxides, etc. This is why Mars and Venus have virtually no free oxygen in their atmospheres. What little they do have comes from UV light striking the atmosphere and causing the bonded forms to dissociate, temporarily freeing the oxygen.
Earth only has as much free oxygen as it does because of photosynthesis. This gives us another criterion for habitability: the planet must be able to sustain photosynthesis.
So let’s start putting this all together.
Firstly, as the star leaves the main sequence, swelling up into a red giant and growing far brighter, the “Goldilocks zone” will sweep outwards. Planets that were formerly habitable, like the Earth, will be roasted if they aren’t swallowed entirely by the Sun as it grows. The habitable zone will instead lie further out, more where Jupiter is now.
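The outward sweep can be estimated from simple inverse-square scaling: a planet receives Earth-equivalent sunlight at a distance of roughly √(L/L☉) astronomical units. A quick sketch (the luminosity values are illustrative assumptions, not measurements of any particular evolutionary stage):

```python
import math

# Rough inverse-square scaling of the habitable zone: a planet receives
# the same flux as present-day Earth at d = sqrt(L / L_sun) AU. This
# ignores atmospheric details; it is an order-of-magnitude guide only.

def habitable_distance_au(lum_solar):
    """Distance (AU) receiving Earth-equivalent flux from a star of given luminosity."""
    return math.sqrt(lum_solar)

# Assumed: a star brightened ~25x early in its red giant evolution
print(f"{habitable_distance_au(25.0):.1f} AU")    # ~5 AU, near Jupiter's orbit

# Assumed: ~1,000x solar luminosity near the tip of the red giant branch
print(f"{habitable_distance_au(1000.0):.1f} AU")  # ~32 AU, out past Neptune
```

The zone keeps moving as the star brightens, which is exactly why the timing questions below matter.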
However, even if a planet were in this new habitable zone, that alone doesn’t make it habitable; it must also have an oxygen-rich atmosphere. For that, we need to convert the atmosphere from an oxygen-starved one to an oxygen-rich one via photosynthesis.
So the question is how quickly can this occur? Too slow and the habitable zone may have already swept by or the star may have run out of hydrogen in the shell and started contracting again only to ignite helium fusion in the core, once again freezing the planet.
The only example we have so far is our own planet. For the first three billion years of life on Earth, there was little free oxygen, until photosynthetic organisms arose and raised oxygen to levels near those of today. However, that conversion took several hundred million years. While it could perhaps be sped up by an order of magnitude, to tens of millions of years, with genetically engineered bacteria seeded on the planet, we still need to make sure the timescales work out.
It turns out the timescales differ with stellar mass. More massive stars burn through their fuel faster and thus have shorter red giant phases. For stars like the Sun, the red giant phase can last about 1.5 billion years, roughly 100 times longer than is necessary to develop an oxygen-rich atmosphere. For stars twice as massive as the Sun, that timescale drops to a mere 40 million years, approaching the lower limit of what we’d need. More massive stars evolve even more quickly. So for this to be plausible, we need lower-mass stars that evolve slowly; a rough upper limit is about two solar masses.
However, there’s one more effect we need to worry about: can we have enough CO2 in the atmosphere to even have photosynthesis? While not nearly as reactive as oxygen, carbon dioxide is also subject to being removed from the atmosphere, through effects like silicate weathering (CO2 + CaSiO3 → CaCO3 + SiO2). These effects are slow, but they build up over geological timescales. This means we can’t use old planets, since they would have had all their free CO2 locked away into the surface. This balance was explored in a paper published in 2009, which determined that, for an Earth-mass planet, the free CO2 would be exhausted long before the parent star even reached the red giant phase!
So we need low-mass stars that evolve slowly, to allow enough time to develop the right atmosphere; but if they evolve that slowly, there’s not enough CO2 left to build the atmosphere anyway. We’re stuck in a real Catch-22. The only way to make this feasible is to find a way to introduce sufficient new CO2 into the atmosphere just as the habitable zone starts sweeping by.
Fortunately, there are some pretty large repositories of CO2 just flying around: comets, whose ices include large amounts of frozen carbon monoxide and carbon dioxide alongside water ice. Crashing a few of them into a planet would introduce enough CO2 to potentially get photosynthesis started (once the dust settled). Do that a few hundred thousand years before the planet enters the habitable zone, wait ten million years, and the planet could potentially remain habitable for as much as an additional billion years.
Ultimately, this scenario would be plausible, but not exactly a good personal investment, since you’d be dead long before you could reap the benefits. A long-term strategy for the survival of a space-faring species, perhaps, but not a quick fix for tossing down colonies and outposts.
While science education often focuses on teaching the scientific method (or at least tries to), the real process of science is often far less linear. Theories tie together so many points of data that making singular predictions to confirm or refute a proposition is often challenging. Such is the case for stellar evolution: the understanding is woven together from so many independent pieces that the process is more of a roaring sea than a directed river.
Realizing this, I’ve been keen on instances in which necessary predictions are later confirmed observationally. A new study, led by Mariela Vieytes of the University of Buenos Aires and accepted for publication in Astronomy & Astrophysics, does just that by demonstrating one of the necessary conditions of post-main-sequence evolution. Specifically, astronomers need to establish that stars undergo significant mass loss (~0.1–0.3 M☉) during their red giant branch evolution. This requirement was set forth as part of the expected behavior necessary to explain: “i) the very existence of the horizontal branch (HB) and its morphology, ii) the pulsational properties of RR Lyrae stars, iii) the absence of asymptotic giant branch (AGB) stars brighter than the red giant branch (RGB) tip, and the chemistry and characteristics in the AGB, post-AGB and planetary nebula evolutionary phases, iv) the mass of white dwarf (WD) stars.”
Astronomers expected to find confirmation of this mass loss by detecting gas congregating in the cores of globular clusters after being shed by stars evolving along the RGB. Yet searches for this gas came up mostly empty. Eventually astronomers realized that gas would be stripped relatively quickly as globular clusters plunged through the galactic plane. But this left them with the need to confirm the prediction in some other manner.
One way to do this is to look at the stars themselves. If material in a star’s photosphere moves faster than the escape velocity, the star will lose mass, and just how much faster determines the amount lost. By analyzing the Doppler shifts of specific absorption lines in several stars in the cluster ω Centauri, the team estimated mass-loss rates of a few 10⁻¹⁰ to a few 10⁻⁹ M☉ yr⁻¹, in general agreement with the predictions of evolutionary models.
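As a sanity check, rates of that order really can add up to the ~0.1–0.3 M☉ that the models require, provided they are sustained over the red giant branch lifetime. A back-of-the-envelope sketch (the RGB duration below is an assumed round number for a Sun-like star, not a figure from the study):

```python
# Order-of-magnitude check: do the measured mass-loss rates integrate to
# the 0.1-0.3 solar masses that evolutionary models require? We assume a
# constant rate over an assumed RGB lifetime of a few hundred million years.

RGB_DURATION_YR = 3e8  # assumed red giant branch lifetime, in years

def total_mass_lost(rate_msun_per_yr, duration_yr=RGB_DURATION_YR):
    """Total mass shed (solar masses) at a constant mass-loss rate."""
    return rate_msun_per_yr * duration_yr

low = total_mass_lost(1e-10)   # low end of the measured rates -> 0.03 M_sun
high = total_mass_lost(1e-9)   # high end of the measured rates -> 0.3 M_sun
print(low, high)
```

The high end of the measured range comfortably reaches the required mass loss; the low end falls short, which is consistent with the rates being snapshots that vary along the RGB.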
A newly discovered red giant star is a relic from the early universe — a star that may have been among the second generation of stars to form after the Big Bang. Located in the dwarf galaxy Sculptor some 290,000 light-years away, the star has a remarkably similar chemical make-up to the Milky Way’s oldest stars. Its presence supports the theory that our galaxy underwent a “cannibal” phase, growing to its current size by swallowing dwarf galaxies and other galactic building blocks.
“This star likely is almost as old as the universe itself,” said astronomer Anna Frebel of the Harvard-Smithsonian Center for Astrophysics, lead author of the Nature paper reporting the finding.
Dwarf galaxies are small galaxies with just a few billion stars, compared to hundreds of billions in the Milky Way. In the “bottom-up model” of galaxy formation, large galaxies attained their size over billions of years by absorbing their smaller neighbors.
“If you watched a time-lapse movie of our galaxy, you would see a swarm of dwarf galaxies buzzing around it like bees around a beehive,” explained Frebel. “Over time, those galaxies smashed together and mingled their stars to make one large galaxy — the Milky Way.”
If dwarf galaxies are indeed the building blocks of larger galaxies, then the same kinds of stars should be found in both kinds of galaxies, especially in the case of old, “metal-poor” stars. To astronomers, “metals” are chemical elements heavier than hydrogen or helium. Because they are products of stellar evolution, metals were rare in the early Universe, and so old stars tend to be metal-poor.
Old stars in the Milky Way’s halo can be extremely metal-poor, with metal abundances 100,000 times poorer than in the Sun, which is a typical younger, metal-rich star. Surveys over the past decade have failed to turn up any such extremely metal-poor stars in dwarf galaxies, however.
“The Milky Way seemed to have stars that were much more primitive than any of the stars in any of the dwarf galaxies,” says co-author Josh Simon of the Observatories of the Carnegie Institution. “If dwarf galaxies were the original components of the Milky Way, then it’s hard to understand why they wouldn’t have similar stars.”
The team suspected that the methods used to find metal-poor stars in dwarf galaxies were biased in a way that caused the surveys to miss the most metal-poor stars. Team member Evan Kirby, a Caltech astronomer, developed a method to estimate the metal abundances of large numbers of stars at a time, making it possible to efficiently search for the most metal-poor stars in dwarf galaxies.
“This was harder than finding a needle in a haystack. We needed to find a needle in a stack of needles,” said Kirby. “We sorted through hundreds of candidates to find our target.”
Among stars he found in the Sculptor dwarf galaxy was one faint, 18th-magnitude speck designated S1020549. Spectroscopic measurements of the star’s light with Carnegie’s Magellan-Clay telescope in Las Campanas, Chile, determined it to have a metal abundance 6,000 times lower than that of the Sun; this is five times lower than any other star found so far in a dwarf galaxy.
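Astronomers usually express such depletions logarithmically as [Fe/H], the base-10 log of a star’s iron-to-hydrogen ratio relative to the Sun’s. A quick sketch of the conversion (the helper function is ours, not from any survey pipeline):

```python
import math

# [Fe/H] is defined as log10 of a star's iron-to-hydrogen ratio divided
# by the solar ratio; the Sun has [Fe/H] = 0 by definition. A star
# 6,000 times more metal-poor than the Sun therefore sits near -3.8.

def metallicity_index(depletion_factor):
    """[Fe/H] for a star whose metal abundance is depletion_factor times below solar."""
    return math.log10(1.0 / depletion_factor)

print(f"{metallicity_index(6000):.1f}")  # ~ -3.8
```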
The researchers measured S1020549’s total metal abundance from elements such as magnesium, calcium, titanium, and iron. The overall abundance pattern resembles those of old Milky Way stars, lending the first observational support to the idea that these galactic stars originally formed in dwarf galaxies.
The researchers expect that further searches will discover additional metal-poor stars in dwarf galaxies, although the distance and faintness of the stars pose a challenge for current optical telescopes. The next generation of extremely large optical telescopes, such as the proposed 24.5-meter Giant Magellan Telescope, equipped with high-resolution spectrographs, will open up a new window for studying the growth of galaxies through the chemistries of their stars.
In the meantime, says Simon, the extremely low metal abundance of S1020549 marks a significant step towards understanding how our galaxy was assembled. “The original idea that the halo of the Milky Way was formed by destroying a lot of dwarf galaxies does indeed appear to be correct.”
Like everything else in the Universe, stars get old. As they become older, stars like our own Sun “puff up”, becoming red giants for a period before finally settling down into white dwarfs. During this late period of their stellar lives, about 30% of low-mass red giants exhibit a curious variability in their brightness that remains unexplained to this day. A new survey of these types of red giants rules out most of the current explanations put forth, making it necessary to find a new theory for their behavior.
Red giants are a stage late in a Sun-like star’s life, when most of the fuel powering nuclear fusion in the core is exhausted. The resulting lack of radiation pressure pushing out against the force of gravity causes the star to begin collapsing in on itself. This collapse, though, heats a shell of hydrogen around the core enough to reignite fusion, and the renewed radiation pressure swells the star. The result can be a star 1,000 to 10,000 times more luminous.
Variability in the light output of red giants is natural: they swell up and shrink down in a consistent pattern, resulting in brighter and dimmer output. There is, however, a change in the brightness of roughly a third to half of these stars that happens over longer time periods, up to five years long.
Called the Long Secondary Period (LSP), the changing brightness of the star happens over longer timescales than the shorter period pulsation. It is this long-term variation in brightness that remains unexplained.
A new detailed study of 58 variable red giants in the Large Magellanic Cloud by Peter Wood and Christine Nicholls, both of the Research School of Astronomy and Astrophysics at the Australian National University, shows that the proposed explanations of this mysterious variability fall short of the measured properties of the stars. Nicholls and Wood used the FLAMES/GIRAFFE spectrograph on ESO’s Very Large Telescope, and combined the information with data from other telescopes like the Spitzer Space Telescope.
There are two leading explanations of the phenomenon: a companion object orbiting the red giant in such a way as to change its apparent brightness, or a circumstellar dust cloud that periodically blocks the light coming from the star in our direction.
A binary companion would tug the red giant into an orbit that carries it alternately toward and away from Earth, and if the companion passed in front of the star, it would also dim the light streaming from the red giant. However, the pattern of brightness variation is remarkably similar across all of these stars, meaning that for this explanation to work, every red giant exhibiting the LSP variation would need a companion of similar size, approximately 0.09 times the mass of the Sun. That would be extremely unlikely, given the large number of stars showing this brightness variation.
The effect of a circumstellar dust cloud could be a possible explanation. A cloud of circumstellar dust that obscures the light from the star once per orbit would dim its light enough to explain the phenomenon. The presence of such a dust cloud would be revealed by an excess of light coming from the star in the mid-infrared spectrum. The dust would absorb light from the star, and re-emit it in the form of light in the mid-infrared region of the spectrum.
Observations of LSP stars show the mid-infrared signature that’s a telltale sign of dust, but the correlation between the two doesn’t mean that the dust is causing the brightness variation. It could be that the dust is a byproduct of ejected mass from the star itself, the underlying cause of which could be associated with the change in brightness.
Whatever the cause of the oscillation of brightness in these red giants may be, it does make them eject mass in large clumps or in the form of an expanding disc. Obviously, further observations will be necessary to track down the reason for this phenomenon.
Source: ESO, arXiv papers