Journal Club: On Nothing

Today's Journal Club is about a new addition to the Standard Model of fundamental particles.


According to Wikipedia, a journal club is a group of individuals who meet regularly to critically evaluate recent articles in scientific literature. This being Universe Today, if we occasionally stray into critically evaluating each other’s critical evaluations, that’s OK too. And of course, the first rule of Journal Club is… don’t talk about Journal Club.

So, without further ado – today’s journal article under the spotlight is about nothing.

The premise of the article is that to define nothing we need to look beyond a simple vacuum and think of nothing in terms of what there was before the Big Bang – i.e. really nothing.

For example, you can have a bubble of nothing (no topology, no geometry), a bubble of next to nothing (topology, but no geometry) or a bubble of something (which has topology, geometry and most importantly volume). The universe is a good example of a bubble of something.

The paper walks the reader through a train of logic which ends by defining nothing as ‘anti-de Sitter space as the curvature length approaches zero’. De Sitter space is essentially a ‘vacuum solution’ of Einstein’s field equations – that is, a mathematically modelled universe with a positive cosmological constant. So it expands at an accelerating rate even though it is an empty vacuum. Anti-de Sitter space is a vacuum solution with a negative cosmological constant – so it shrinks inward even though it is an empty vacuum. And as its curvature length approaches zero, you get nothing.

Having so defined nothing, the authors then explore how you might get a universe to spontaneously arise from that nothing – and nope, apparently it can’t be done. Although there are various ways to enable ‘tunnelling’ that can produce quantum fluctuations within an apparent vacuum – you can’t ‘up-tunnel’ from nothing (or at least you can’t up-tunnel from ‘anti-de Sitter space as the curvature length approaches zero’).

The paper acknowledges this is obviously a problem, since here we are. By way of explanation, the authors suggest:

  • that we can get past the problem by appealing to immeasurable extra dimensions (a common strategy in theoretical physics for explaining impossible things without anyone being able to easily prove or disprove it);
  • that their definition of nothing is just plain wrong; or
  • that they (and we) are just not asking the right questions.

Clearly the third explanation is the authors’ favoured one as they end with the statement: ‘One thing seems clear… to truly understand everything, we must first understand nothing‘. Nice.

So – comments? Is appealing to extra dimensions just a way of dodging a need for evidence? Nothing to declare? Want to suggest an article for the next edition of Journal Club?

Today’s article:
Brown and Dahlen, On Nothing.

Unlocking Cosmology With Type Ia Supernovae

New research shows that some old stars known as white dwarfs might be held up by their rapid spins, and when they slow down, they explode as Type Ia supernovae. Thousands of these "time bombs" could be scattered throughout our Galaxy. In this artist's conception, a supernova explosion is about to obliterate an orbiting Saturn-like planet. Credit: David A. Aguilar (CfA)

Let’s face it, cosmologists catch a lot of flak. It’s easy to see why. These are people who routinely publish papers that claim to ever more finely constrain the size of the visible Universe, the rate of its breakneck expansion, and the distance to galaxies that lie closer and closer to the edges of both time and space. Many skeptics scoff at scientists who seem to draw such grand conclusions without being able to directly measure the unbelievable cosmic distances involved. Well, it turns out cosmologists are a creative bunch. Enter our star (ha, ha): the Type Ia supernova. These stellar fireballs are one of the main tools astronomers use to make such fantastic discoveries about our Universe. But how exactly do they do it?

First, let’s talk physics. Type Ia supernovae result from a mismatched marriage gone wrong. When a red giant and a white dwarf (or, less commonly, two white dwarfs) become trapped in a gravitational standoff, the denser dwarf star begins to accrete material from its bloated companion. Eventually the white dwarf reaches a critical mass (about 1.4 times that of our own Sun) and the degeneracy pressure exerted by its core can no longer support its weight. A runaway nuclear reaction occurs, resulting in a cataclysmic explosion so large it can be seen billions of light years away. Since Type Ia supernovae always result from the explosion of a white dwarf, and since the white dwarf always becomes unstable at the same critical mass, astronomers can work out the precise luminosity of such an event. And they have. This is great news, because it means that Type Ia supernovae can be used as so-called standard candles with which to probe distances in the Universe. After all, if you know how bright something is and you know how bright it appears from where you are, you can easily figure out how far away it must be.
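That last step is just the inverse-square law. Here is a minimal sketch of the idea; the luminosity and flux values below are illustrative placeholders, not measurements of any real supernova:

```python
import math

# Inverse-square law: observed flux F = L / (4 * pi * d^2),
# so the distance follows as d = sqrt(L / (4 * pi * F)).
L_SUN = 3.828e26          # solar luminosity in watts
LY_IN_M = 9.461e15        # metres per light year

L_sn = 5e9 * L_SUN        # assumed peak luminosity: ~5 billion Suns (illustrative)
F_obs = 1.0e-14           # hypothetical measured flux in W/m^2

d_m = math.sqrt(L_sn / (4 * math.pi * F_obs))
print(f"{d_m / LY_IN_M:.2e} light years")   # a few hundred million light years
```

In practice astronomers work in magnitudes and apply corrections for light-curve shape and dust, but the core of the distance measurement is this calculation.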

A Type Ia supernova occurs when a white dwarf accretes material from a companion star until it exceeds the Chandrasekhar limit and explodes. By studying these exploding stars, astronomers can measure dark energy and the expansion of the universe. CfA scientists have found a way to correct for small variations in the appearance of these supernovae, so that they become even better standard candles. The key is to sort the supernovae based on their color. Credit: NASA/CXC/M. Weiss

Now here’s where cosmology comes in. Photons naturally lose energy as they travel across the expanding Universe, so the light astronomers observe coming from Type Ia supernovae will always be redshifted. The magnitude of that redshift depends on the amount of dark energy that is causing the Universe to expand. It also means that the apparent brightness of a supernova (that is, how bright it looks from Earth) can be monitored to determine how quickly it is receding from us. Observations of the night sky will always be a function of a specific cosmology; but because their distances can be so easily calculated, Type Ia supernovae actually allow astronomers to draw a physical map of the expansion of the Universe.

Spotting a Type Ia supernova in its early, explosive throes is a rare event; after all, the Universe is a pretty big place. But when it does happen, it offers observers an unparalleled opportunity to dissect the chaos that leads to such a massive explosion. Sometimes astronomers are even lucky enough to catch one right in our cosmic backyard, a feat that occurred last August when Caltech’s Palomar Transient Factory (PTF) detected a Type Ia supernova in M101, a galaxy just 25 million light years away. And it wasn’t just professionals who got to have all the fun! Amateur and professional astronomers alike were able to use this supernova (the romantically named PTF11kly) to probe the inner workings of these precious standard candles. Want to learn more about how you can get in on the action the next time around? Check out UT’s podcast, Getting Started in Amateur Astronomy, for more information.

Guest Post: The Cosmic Energy Inventory

The Cosmic Energy Inventory chart by Markus Pössel. Click for larger version.


Now that the old year has drawn to a close, it’s traditional to take stock. And why not think big and take stock of everything there is?

Let’s base our inventory on energy. And as Einstein taught us that energy and mass are equivalent, that means automatically taking stock of all the mass that’s in the universe, as well – including all the different forms of matter we might be interested in.

Of course, since the universe might well be infinite in size, we can’t simply add up all the energy. What we’ll do instead is look at fractions: How much of the energy in the universe is in the form of planets? How much is in the form of stars? How much is plasma, or dark matter, or dark energy?


The chart above is a fairly detailed inventory of our universe. The numbers I’ve used are from the article The Cosmic Energy Inventory by Masataka Fukugita and Jim Peebles, published in 2004 in the Astrophysical Journal (vol. 616, p. 643ff.). The chart style is borrowed from Randall Munroe’s Radiation Dose Chart over at xkcd.

These fractions will have changed a lot over time, of course. Around 13.7 billion years ago, in the Big Bang phase, there would have been no stars at all. And the number of, say, neutron stars or stellar black holes will have grown continuously as more and more massive stars have ended their lives, producing these kinds of stellar remnants. For this chart, following Fukugita and Peebles, we’ll look at the present era. What is the current distribution of energy in the universe? Unsurprisingly, the values given in that article come with different uncertainties – after all, the authors are extrapolating to a pretty grand scale! The details can be found in Fukugita & Peebles’ article; for us, their most important conclusion is that the observational data and their theoretical bases are now indeed firm enough for an approximate, but differentiated and consistent picture of the cosmic inventory to emerge.

Let’s start with what’s closest to our own home. How much of the energy (equivalently, mass) is in the form of planets? As it turns out: not a lot. Based on extrapolations from what data we have about exoplanets (that is, planets orbiting stars other than the sun), just one part-per-million (1 ppm) of all energy is in the form of planets; in scientific notation: 10⁻⁶. Let’s take “1 ppm” as the basic unit for our first chart, and represent it by a small light-green square. (Fractions of 1 ppm will be represented by partially filled such squares.) Here is the first box (of three), listing planets and other contributions of about the same order of magnitude:

So what else is in that box? Other forms of condensed matter, mainly cosmic dust, account for 2.5 ppm, according to rough extrapolations based on observations within our home galaxy, the Milky Way. Among other things, this is the raw material for future planets!

For the next contribution, a jump in scale. To the best of our knowledge, pretty much every galaxy contains a supermassive black hole (SMBH) in its central region. Masses for these SMBHs vary between a hundred thousand times the mass of our Sun and several billion solar masses. Matter falling into such a black hole (and getting caught up, intermittently, in super-hot accretion disks swirling around the SMBHs) is responsible for some of the brightest phenomena in the universe: active galaxies, including ultra high-powered quasars. The contribution of matter caught up in SMBHs to our energy inventory is rather modest, though: about 4 ppm; possibly a bit more.

Who else is playing in the same league? The sum total of all electromagnetic radiation produced by stars and by active galaxies (the two most important sources) over the course of the last billions of years: 2 ppm. Also, neutrinos produced during supernova explosions (at the end of the life of massive stars), or in the formation of white dwarfs (remnants of lower-mass stars like our Sun), or simply as part of the ordinary fusion processes that power ordinary stars: 3.2 ppm all in all.

Then, there’s binding energy: If two components are bound together, you will need to invest energy in order to separate them. That’s why binding energy is negative – it’s an energy deficit you will need to overcome to pry the system’s components apart. Nuclear binding energy, from stars fusing together light elements to form heavier ones, accounts for -6.3 ppm in the present universe – and the total gravitational binding energy accumulated as stars, galaxies, galaxy clusters, other gravitationally bound objects and the large-scale structure of the universe have formed over the past 14 or so billion years, for an even larger -13.4 ppm. All in all, the negative contributions from binding energy more than cancel out all the positive contributions by planets, radiation, neutrinos etc. we’ve listed so far.
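Tallying the box-1 entries quoted above (all values in parts per million) confirms that the two binding-energy deficits outweigh the positive contributions:

```python
# Box-1 contributions as quoted in the text (Fukugita & Peebles 2004), in ppm.
contributions_ppm = {
    "planets": 1.0,
    "cosmic dust and other condensed matter": 2.5,
    "matter in supermassive black holes": 4.0,
    "radiation from stars and active galaxies": 2.0,
    "neutrinos from stellar processes": 3.2,
    "nuclear binding energy": -6.3,
    "gravitational binding energy": -13.4,
}

positive = sum(v for v in contributions_ppm.values() if v > 0)
net = sum(contributions_ppm.values())
print(positive)  # 12.7 ppm of positive contributions
print(net)       # about -7 ppm: net negative, as the text notes
```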

Which brings us to the next level. In order to visualize larger contributions, we need a change of scale. In box 2, one square will represent a fraction of 1/20,000 or 0.00005. Put differently: Fifty of the little squares in the first box correspond to a single square in the second box:

So here, without further ado, is box 2 (including, in the upper right corner, a scale model of the first box):

Now we are in the realm of stars and related objects. By measuring the luminosity of galaxies, and using standard relations between the masses and luminosity of stars (“mass-to-light-ratio”), you can get a first estimate for the total mass (equivalently: energy) contained in stars. You’ll also need to use the empirical relation (“initial mass function”) for how this mass is distributed, though: How many massive stars should there be? How many lower-mass stars? Since different stars have different lifetimes (live massively, die young), this gives estimates for how many stars out there are still in the prime of life (“main sequence stars”) and how many have already died, leaving white dwarfs (from low-mass stars), neutron stars (from more massive stars) or stellar black holes (from even more massive stars) behind. The mass distribution also provides you with an estimate of how much mass there is in substellar objects such as brown dwarfs – objects which never had sufficient mass to make it to stardom in the first place.

Let’s start small with the neutron stars at 0.00005 (1 square, at our current scale) and the stellar black holes (0.00007). Interestingly, those are outweighed by brown dwarfs which, individually, have much less mass, but of which there are, apparently, really a lot (0.00014; this is typical of stellar mass distribution – lots of low-mass stars, much fewer massive ones.) Next come white dwarfs as the remnants of lower-mass stars like our Sun (0.00036). And then, much more than all the remnants or substellar objects combined, ordinary, main sequence stars like our Sun and its higher-mass and (mostly) lower-mass brethren (0.00205).

Interestingly enough, in this box, stars and related objects contribute about as much mass (or energy) as more undifferentiated types of matter: molecular gas (mostly hydrogen molecules, at 0.00016), hydrogen and helium atoms (HI and HeI, 0.00062) and, most notably, the plasma that fills the void between galaxies in large clusters (0.0018) add up to a whopping 0.00258. Stars, brown dwarfs and remnants add up to 0.00267.
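Those two subtotals can be verified directly from the individual figures quoted above:

```python
# Box-2 fractions as quoted in the text (fractions of the total energy).
stars_and_remnants = {
    "neutron stars": 0.00005,
    "stellar black holes": 0.00007,
    "brown dwarfs": 0.00014,
    "white dwarfs": 0.00036,
    "main sequence stars": 0.00205,
}
diffuse_matter = {
    "molecular gas": 0.00016,
    "atomic hydrogen and helium (HI, HeI)": 0.00062,
    "intracluster plasma": 0.0018,
}

print(round(sum(stars_and_remnants.values()), 5))  # 0.00267
print(round(sum(diffuse_matter.values()), 5))      # 0.00258
```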

Further contributions with about the same order of magnitude are survivors from our universe’s most distant past: The cosmic microwave background (CMB), remnant of the extremely hot radiation interacting with equally hot plasma in the big bang phase, contributes 0.00005; the lesser-known cosmic neutrino background, another remnant of that early equilibrium, contributes a remarkable 0.0013. The binding energy from the first primordial fusion events (the formation of light elements within those famous “first three minutes”) gives another contribution in this range: -0.00008.

While, in the previous box, the matter we love, know and need was not dominant, it at least made a dent. This changes when we move on to box 3. In this box, one square corresponds to 0.005. In other words: 100 squares from box 2 add up to a single square in box 3:

Box 3 is the last box of our chart. Again, a scale model of box 2 is added for comparison: All that’s in box 2 corresponds to one-square-and-a-bit in box 3.

The first new contribution: warm intergalactic plasma. Its presence is deduced from the overall amount of ordinary matter (which follows from measurements of the cosmic background radiation, combined with data from surveys and measurements of the abundances of light elements) as compared with the ordinary matter that has actually been detected (as plasma, stars, and so on). From models of large-scale structure formation, it follows that this missing matter should come in the shape (non-shape?) of a diffuse plasma, which isn’t dense (or hot) enough to allow for direct detection. This cosmic filler substance amounts to 0.04, or 85% of ordinary matter, showing just how much of a fringe phenomenon those astronomical objects we usually hear and read about really are.

The final two (dominant) contributions come as no surprise for anyone keeping up with basic cosmology: dark matter at 23% is, according to simulations, the backbone of cosmic large-scale structure, with ordinary matter no more than icing on the cake. Last but not least, there’s dark energy with its contribution of 72%, responsible both for the cosmos’ accelerated expansion and for the 2011 physics Nobel Prize.

Minority inhabitants of a part-per-million type of object made of non-standard cosmic matter – that’s us. But at the same time, we are a species that, its cosmic fringe position notwithstanding, has made remarkable strides in unravelling the big picture – including the cosmic inventory represented in this chart.

__________________________________________

Here is the full chart for you to download: the PNG version (1200×900 px, 233 kB) or the lovingly hand-crafted SVG version (29 kB).

The chart “The Cosmic Energy Inventory” is licensed under Creative Commons BY-NC-SA 3.0. In short: You’re free to use it non-commercially; you must add the proper credit line “Markus Pössel [www.haus-der-astronomie.de]”; if you adapt the work, the result must be available under this or a similar license.

Technical notes: As is common in astrophysics, Fukugita and Peebles give densities as fractions of the so-called critical density; in the usual cosmological models, that density, evaluated at any given time (in this case: the present), is critical for determining the geometry of the universe. Using very precise measurements of the cosmic background radiation, we know that the average density of the universe is indistinguishable from the critical density. For simplicity’s sake, I’m skipping this detour in the main text and quoting all of F & P’s numbers as “fractions of the universe’s total energy (density)”.

For the supermassive black hole contributions, I’ve neglected the fraction ?n in F & P’s article; that’s why I’m quoting a lower limit only. The real number could theoretically be twice the quoted value; it’s apparently more likely to be close to the value given here, though. For my gravitational binding energy, I’ve added F & P’s primeval gravitational binding energy (no. 4 in their list) and their binding energy from dissipative gravitational settling (no. 5).

The fact that the content of box 3 adds up not quite to 1, but to 0.997, is an artefact of rounding not quite consistently when going from box 2 to box 3. I wanted to keep the sum of all that’s in box 2 at the precision level of that box.
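That bookkeeping can be checked with a rough tally of the rounded values quoted in this post: box 3’s own entries come to 0.99, and adding box 2’s content brings the total close to, but not exactly, 1.

```python
# Rounded fractions as quoted in the text above.
box3 = 0.04 + 0.23 + 0.72   # warm plasma, dark matter, dark energy
box2 = 0.00267 + 0.00258 + 0.00005 + 0.0013 - 0.00008
#      stars     gas       CMB       neutrino bg  primordial binding

total = box3 + box2
print(round(total, 3))  # 0.997: close to 1, the gap being a rounding artefact
```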

What if the Earth had Two Moons?

The Earth and Moon as seen from Mariner 10 en route to Venus. This could be a similar view of two moons as seen from Earth. Image credit: NASA/courtesy of nasaimages.org

The idea of an Earth with two moons has been a science fiction staple for decades. More recently, real possibilities of an Earth with two moons have popped up. The properties of the Moon’s far side have many scientists thinking that another moon used to orbit the Earth before smashing into the Moon and becoming part of its mass. And since 2006, astronomers have been tracking smaller secondary moons that our own Earth-Moon system captures; these metre-wide moons stay for a few months then leave.

But what if the Earth actually had a second permanent moon today? How different would life be? Astronomer and physicist Neil F. Comins delves into this thought experiment, and suggests some very interesting consequences. 

This shot of Io orbiting Jupiter shows the scale between other moons and their planet. Image credit:NASA/courtesy of nasaimages.org

Our Earth-Moon system is unique in the solar system. The Moon is 1/81 the mass of Earth while most moons are only about 3/10,000 the mass of their planet. The size of the Moon is a major contributing factor to complex life on Earth. It is responsible for the high tides that stirred up the primordial soup of the early Earth, it’s the reason our day is 24 hours long, it gives light for the variety of life forms that live and hunt during the night, and it keeps our planet’s axis tilted at the same angle to give us a constant cycle of seasons.

A second moon would change that.

For his two-mooned Earth thought experiment, Comins proposes that our Earth-Moon system formed as it did — he needs the same early conditions that allowed life to form — before capturing a third body. This moon, which I will call Luna, sits halfway between the Earth and the Moon.

Luna’s arrival would wreak havoc on Earth. Its gravity would tug on the planet causing absolutely massive tsunamis, earthquakes, and increased volcanic activity. The ash and chemicals raining down would cause a mass extinction on Earth.

But after a few weeks, things would start to settle.

Luna would adjust to its new position between the Earth and the Moon. The pull from both bodies would cause land tides and volcanic activity on the new moon; it would develop activity akin to Jupiter’s volcanic moon Io. The constant volcanic activity would make Luna smooth and uniform, as well as a beautiful fixture in the night sky.

New Horizons captured this image of volcanic activity on Io. The same sight could be seen of Luna from Earth. Image credit: NASA/courtesy of nasaimages.org

The Earth would also adjust to its two moons, giving life a chance to arise. But life on a two-mooned Earth would be different.

The combined light from the Moon and Luna would make for much brighter nights, and their different orbital periods would mean the Earth would have fewer fully dark nights. This would lead to different kinds of nocturnal beings; nighttime hunters would have an easier time seeing their prey, but the prey would develop better camouflage mechanisms. The need to survive could lead to more cunning and intelligent breeds of nocturnal animals.

Humans would have to adapt to the challenges of this two-mooned Earth. The higher tides created by Luna would make shoreline living almost impossible — the difference between high and low tides would be measured in thousands of feet. Proximity to the water is a necessity for sewage draining and transport of goods, but with higher tides and stronger erosion, humans would have to develop different ways of using the oceans for transfer and travel. The habitable area of Earth, then, would be much smaller.
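The book doesn’t give Luna’s mass, but the inverse-cube scaling of tidal forces shows why halving the distance matters so much. A minimal sketch, assuming (purely for illustration) that Luna is as massive as the Moon:

```python
# Tidal acceleration raised by a moon of mass M at distance d scales as M / d**3.
# Assumption for illustration only: Luna has the same mass as the Moon.
moon_mass, moon_distance = 1.0, 1.0   # in units of the Moon's mass and distance
luna_mass, luna_distance = 1.0, 0.5   # "halfway between the Earth and the Moon"

tide_moon = moon_mass / moon_distance**3
tide_luna = luna_mass / luna_distance**3

print(tide_luna / tide_moon)  # 8.0: Luna alone would raise tides ~8x the Moon's
```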

The measurement of time would also be different. Our months would be irrelevant. Instead, a system of full and partial months would be necessary to account for the movement of two moons.

A scale comparison of the Earth, the Moon, and Jupiter’s largest moons (the Jovian moons). Image credit: NASA/courtesy of nasaimages.org

Eventually, the Moon and Luna would collide; like the Moon today, both moons would be receding from Earth. Their eventual collision would send debris raining through Earth’s atmosphere and lead to another mass extinction. The end result would be one moon orbiting the Earth, and another era of life would be primed to start.

Source: Neil Comins’ What if the Earth had Two Moons? And Nine Other Thought Provoking Speculations on the Solar System.

Why Do We Live in Three Dimensions?

The puzzling universe. Image credit: NASA/courtesy of nasaimages.org


Day to day life has made us all comfortable with 3 dimensions; we constantly interact with objects that have height, width, and depth. But why our universe has three spatial dimensions has been a problem for physicists, especially since the 3-dimensional universe isn’t easily explained within superstring theory or Big Bang cosmology. Recently, three researchers have come up with an explanation.  

The history of the universe starting with the Big Bang. Image credit: grandunificationtheory.com

Most astronomers subscribe to Big Bang cosmology, the model that proposes that the universe was born from the explosion of an infinitely tiny point. The theory is supported by observations of the cosmic microwave background and the abundance of certain naturally occurring elements. But Big Bang cosmology is at odds with Einstein’s theory of general relativity – general relativity doesn’t allow for any situation in which the whole universe is one tiny point, which means this theory alone can’t explain the origin of the universe.

The incompatibility between general relativity and Big Bang cosmology has stumped cosmologists. But almost 40 years ago, superstring theory arose as a possible unifying theory of everything.

A visualization of strings. Image credit: R. Dijkgraaf.

Superstring theory suggests that the four fundamental interactions among elementary particles – electromagnetic force, weak interaction, strong interaction, and gravity – are represented as various oscillation modes of very tiny strings. Because gravity is one of the fundamental forces, superstring theory includes an explanation of general relativity. The problem is, superstring theory predicts that there are 10 dimensions – 9 spatial and one temporal. How does this work with our 3 dimensional universe?

Superstring theory has remained little more than a theory for years. Investigations have been restricted to discussing models and scenarios, since performing the actual calculations has been incredibly difficult. As such, superstring theory’s validity and usefulness have remained unclear.

But a group of three researchers, associate professor at KEK Jun Nishimura, associate professor at Shizuoka University Asato Tsuchiya, and project researcher at Osaka University Sang-Woo Kim, has succeeded in generating a model of the universe’s birth based on superstring theory.

Using a supercomputer, they found that at the moment of the Big Bang, the universe had 10 dimensions – 9 spatial and 1 temporal – but only 3 of these spatial dimensions expanded.

This "baby picture" of the universe shows tiny variations in the microwave background radiation temperature. Hot spots show as red, cold spots as dark blue. Credit: NASA/WMAP Science Team

The team developed a method for calculating matrices that represent the interactions of strings. They used these matrices to calculate how 9 dimensional space changes over time. As they moved further back in time, they found that space is extended in 9 directions, but at one point only 3 directions start to expand rapidly.

In short, the 3 dimensional space that we live in can result from the 9 original spatial dimensions string theory predicts.

This result is only part of the solution to the space-time dimensionality puzzle, but it strongly supports the validity of superstring theory. It’s possible, though, that this new method of analyzing superstring theory with supercomputers will lead to its application towards solving other cosmological questions.

Source: The mechanism that explains why our universe was born with 3 dimensions.

A Star-Making Blob from the Cosmic Dawn

This image shows one of the most distant galaxies known, called GN-108036, dating back to 750 million years after the Big Bang that created our universe. Credit: NASA, ESA, JPL-Caltech, STScI, and the University of Tokyo


Looking back in time with some of our best telescopes, astronomers have found one of the most distant and oldest galaxies. The big surprise about this blob-shaped galaxy, named GN-108036, is how exceptionally bright it is, even though its light has taken 12.9 billion years to reach us. This means that back in its heyday – which astronomers estimate at about 750 million years after the Big Bang — it was generating an exceptionally large amount of stars in the “cosmic dawn,” the early days of the Universe.

“The high rate of star formation found for GN-108036 implies that it was rapidly building up its mass some 750 million years after the Big Bang, when the Universe was only about five percent of its present age,” said Bahram Mobasher, from the University of California, Riverside. “This was therefore a likely ancestor of massive and evolved galaxies seen today.”


An international team of astronomers, led by Masami Ouchi of the University of Tokyo, Japan, first identified the remote galaxy after scanning a large patch of sky with the Subaru Telescope atop Mauna Kea in Hawaii. Its great distance was then confirmed with the W.M. Keck Observatory, also on Mauna Kea. Then, infrared observations from the Spitzer and Hubble space telescopes were crucial for measuring the galaxy’s star-formation activity.

“We checked our results on three different occasions over two years, and each time confirmed the previous measurement,” said Yoshiaki Ono, also from the University of Tokyo.

Astronomers were surprised to see such a large burst of star formation because the galaxy is so small and from such an early cosmic era. Back when galaxies were first forming, in the first few hundreds of millions of years after the Big Bang, they were much smaller than they are today, having yet to bulk up in mass.

The team says the galaxy’s star production rate is equivalent to about 100 suns per year. For reference, our Milky Way galaxy is about five times larger and 100 times more massive than GN-108036, but makes roughly 30 times fewer stars per year.
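Those ratios imply a striking difference in star formation per unit of galaxy mass (what astronomers call a specific star formation rate). A quick back-of-the-envelope check using only the figures in this paragraph:

```python
# Figures from the article: GN-108036 forms ~100 solar masses of stars per year;
# the Milky Way is ~100 times more massive but forms ~30 times fewer stars.
sfr_gn = 100.0            # solar masses per year
sfr_mw = sfr_gn / 30.0    # roughly 3.3 solar masses per year
mass_ratio = 100.0        # Milky Way mass relative to GN-108036

# Star formation per unit mass, GN-108036 relative to the Milky Way:
specific_ratio = sfr_gn / (sfr_mw / mass_ratio)
print(round(sfr_mw, 1))       # 3.3
print(round(specific_ratio))  # 3000: per unit mass, ~3000x more active
```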

Astronomers refer to the object’s distance by a number called its “redshift,” which relates to how much its light has stretched to longer, redder wavelengths due to the expansion of the universe. Objects with larger redshifts are farther away and are seen further back in time. GN-108036 has a redshift of 7.2. Only a handful of galaxies have confirmed redshifts greater than 7, and only two of these have been reported to be more distant than GN-108036.
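The stretch factor is simply (1 + z). As an illustration (the Lyman-alpha line is a standard example here, not one cited in the article):

```python
# Observed wavelength = (1 + z) * emitted wavelength.
z = 7.2                    # redshift of GN-108036
lyman_alpha_nm = 121.567   # rest wavelength of hydrogen's Lyman-alpha line

observed_nm = (1 + z) * lyman_alpha_nm
print(round(observed_nm))  # 997 nm: far-ultraviolet light shifted into the infrared
```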

About 380,000 years after the Big Bang, a decrease in the temperature of the Universe caused hydrogen atoms to permeate the cosmos and form a thick fog that was opaque to ultraviolet light, creating what astronomers call the cosmic dark ages.

“It ended when gas clouds of neutral hydrogen collapsed to generate stars, forming the first galaxies, which probably radiated high-energy photons and reionized the Universe,” Mobasher said. “Vigorous galaxies like GN-108036 may well have contributed to the reionization process, which is responsible for the transparency of the Universe today.”

“The discovery is surprising because previous surveys had not found galaxies this bright so early in the history of the universe,” said Mark Dickinson of the National Optical Astronomy Observatory in Tucson, Ariz. “Perhaps those surveys were just too small to find galaxies like GN-108036. It may be a special, rare object that we just happened to catch during an extreme burst of star formation.”

Sources: science paper by Y. Ono et al.; Subaru; Spitzer; Hubble

A New Look at the Milky Way’s Central Bar

The BRAVA fields are shown in this image montage. For reference, the center of the Milky Way is at coordinates L=0, B=0. The regions observed are marked with colored circles. This montage includes the southern Milky Way all the way to the horizon, as seen from CTIO. The telescope in silhouette is the CTIO Blanco 4-m. (Just peeking over the horizon on the left is the Large Magellanic Cloud, the nearest external galaxy to our own.) Image Credit: D. Talent, K. Don, P. Marenfeld & NOAO/AURA/NSF and the BRAVA Project

You may have heard about the restaurant at the end of the Universe, but have you heard of the bar in the middle of the Milky Way?

Nearly 80 years ago, astronomers determined that our home, the Milky Way Galaxy, is a large spiral galaxy. Despite being stuck inside it and unable to see what the entire structure looks like — as we can with the Pinwheel Galaxy, or our nearest neighbor, the Andromeda Galaxy — researchers have long suspected our galaxy is actually a “barred” spiral galaxy. Barred spiral galaxies feature an elongated stellar structure, or bar, in the middle, which in our case is hidden by dust and gas. Many galaxies in the Universe are barred spirals, and yet numerous others feature no central bar at all.

How do these central bars form, and why are they only present in some, but not all spiral galaxies?

A research team led by Dr. R. Michael Rich (UCLA), in a project dubbed BRAVA (Bulge Radial Velocity Assay), measured the velocities of many old, red stars near the center of our galaxy. By studying the spectra of M-class giant stars, the team was able to calculate the velocity of each star along our line of sight. Over a four-year span, spectra for nearly 10,000 stars were acquired with the CTIO Blanco 4-meter telescope located in Chile’s Atacama desert.
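For the curious, those line-of-sight velocities come from the Doppler shift of spectral features in each star’s light. A minimal sketch in Python of the underlying arithmetic (the wavelengths below are illustrative values, not BRAVA measurements):

```python
# Sketch: recovering a line-of-sight (radial) velocity from a Doppler shift.
# The wavelengths here are illustrative, not actual BRAVA data.

C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(lambda_obs_nm: float, lambda_rest_nm: float) -> float:
    """Non-relativistic Doppler formula: v = c * (obs - rest) / rest.
    Positive values mean the star is receding along our line of sight."""
    return C_KM_S * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Example: a spectral feature redshifted by a small fraction of a nanometer.
v = radial_velocity(lambda_obs_nm=705.33, lambda_rest_nm=705.0)
print(f"{v:.1f} km/s")  # ~140 km/s, comparable to bulge-star velocities
```

A survey like BRAVA repeats this measurement for thousands of stars and studies the pattern of approaching and receding velocities across the bulge.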

Analyzing the velocities of the stars in their study, the team was able to confirm that the Milky Way’s central bulge does contain a massive bar, with one end pointed nearly at our solar system. The team also discovered that while our galaxy’s disk rotates like a wheel, the central bar rotates more like a roll of paper towels in a dispenser. These discoveries provide vital clues to help explain the formation of the Milky Way’s central region.

BRAVA data. Image Credit: D. Talent, K. Don, P. Marenfeld & NOAO/AURA/NSF and the BRAVA Project

The spectral data set was compared to a computer simulation, created by Dr. Juntai Shen (Shanghai Observatory), showing how the bar formed from a pre-existing disk of stars. The team’s data fit the model quite well, suggesting that before the central bar existed, there was a massive disk of stars. This conclusion stands in stark contrast to the commonly accepted model of the formation of our galaxy’s central region, which predicts that it formed from an early, chaotic merger of gas clouds. The take-away point from the team’s conclusions is that gas did play some role: it first organized into a massive rotating disk of stars, which then turned into a bar through the gravitational interactions of those stars.

Another benefit of the team’s research is that the stellar spectral data will allow them to analyze the chemical composition of the stars. All stars are composed mostly of hydrogen and helium, but the tiny amounts of other elements (astronomers refer to anything past helium as “metals”) provide insight into the conditions present during a star’s formation.

The BRAVA team found that stars closer to the plane of the Milky Way have fewer “metals” than stars farther from the galactic plane. That conclusion confirms standard views of stellar formation, and the BRAVA data cover a significant area of the galactic bulge that can now be chemically analyzed. If researchers map the metal content of stars throughout the Milky Way, a clear picture of stellar formation and evolution emerges, much as mapping CO2 concentrations in Antarctic ice cores reveals past climate here on Earth.
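As a reference point, astronomers usually quote a star’s metal content as [Fe/H]: the base-10 logarithm of its iron-to-hydrogen ratio relative to the Sun’s. A minimal sketch (the abundance numbers below are illustrative only, not BRAVA measurements):

```python
import math

# Sketch of the "metals" bookkeeping: [Fe/H] = log10((Fe/H)_star / (Fe/H)_sun).
# A value of 0 is solar; -1 means one-tenth the Sun's iron fraction.
# The abundance ratios below are illustrative, not survey data.

def fe_h(fe_over_h_star: float, fe_over_h_sun: float) -> float:
    """Metallicity [Fe/H] of a star relative to the Sun."""
    return math.log10(fe_over_h_star / fe_over_h_sun)

# Example: a bulge star with half the Sun's iron fraction.
print(f"[Fe/H] = {fe_h(1.6e-5, 3.2e-5):+.2f}")  # -0.30, mildly metal-poor
```

On this scale, the BRAVA trend is that stars farther from the galactic plane tend toward more negative [Fe/H].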

If you’d like to read the full paper, a pre-print version is available at: http://arxiv.org/abs/1112.1955

Source: National Optical Astronomy Observatory press release

Underwater Neutrino Detector Will Be Second-Largest Structure Ever Built

Artist's rendering of the KM3NeT array. (Marco Kraan/Property KM3NeT Consortium)

[/caption]

The hunt for elusive neutrinos will soon get its largest and most powerful tool yet: the enormous KM3NeT telescope, currently under development by a consortium of 40 institutions from ten European countries. Once completed, KM3NeT will be the second-largest structure ever made by humans, after the Great Wall of China, and taller than the Burj Khalifa in Dubai… but submerged beneath 3,200 feet of ocean!

KM3NeT – so named because it will encompass a volume of several cubic kilometers – will be composed of lengths of cable holding optical modules on the ends of long arms. These modules will stare at the sea floor beneath the Mediterranean in an attempt to detect the impacts of neutrinos traveling down from deep space.

Successfully spotting neutrinos – subatomic particles that barely interact with “normal” matter and carry no electric charge – will help researchers determine which direction they originated from. That in turn will help them pinpoint distant sources of powerful radiation, like quasars and gamma-ray bursts. Only neutrinos can make it this far, and this long after such events, since they are undeflected by magnetic fields and pass essentially unimpeded across vast cosmic distances.

“The only high energy particles that can come from very distant sources are neutrinos,” said Giorgio Riccobene, a physicist and staff researcher at the National Institute for Nuclear Physics. “So by looking at them, we can probe the far and violent universe.”

Each Digital Optical Module (DOM) is a standalone sensor module with 31 three-inch photomultiplier tubes (PMTs) inside a 17-inch glass sphere.

In effect, by looking down beneath the sea KM3NeT will allow scientists to peer outward into the Universe, deep into space as well as far back in time.

The optical modules dispersed along the KM3NeT array will be able to identify the light given off by muons, which are produced when neutrinos interact in the sea floor. The entire structure will hold thousands of the modules (which resemble large versions of the hovering training spheres used by Luke Skywalker in Star Wars).
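The light those modules watch for is Cherenkov radiation: a muon moving faster than light’s phase velocity in seawater emits a cone of bluish light at a characteristic angle, which is what lets the array reconstruct the muon’s direction. A minimal sketch of that angle (the refractive index below is a typical deep-seawater value, not a KM3NeT specification):

```python
import math

# Sketch: Cherenkov cone half-angle, cos(theta) = 1 / (n * beta).
# n is the medium's refractive index; beta = v/c of the particle.
# n = 1.35 is a typical value for deep seawater, used here as an assumption.

def cherenkov_angle_deg(n: float, beta: float = 1.0) -> float:
    """Cherenkov emission angle in degrees for a particle with speed beta*c."""
    return math.degrees(math.acos(1.0 / (n * beta)))

theta = cherenkov_angle_deg(n=1.35)  # a fully relativistic muon, beta ~ 1
print(f"{theta:.1f} degrees")  # ~42 degrees
```

Timing when that cone sweeps past many modules is, in essence, how the array points back along the muon – and hence the neutrino – track.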

In addition to searching for neutrinos passing through the Earth, KM3NeT will also look toward the galactic center for neutrinos; detecting them there would help confirm the purported existence of dark matter.

Read more about the KM3NeT project here, and check out a detailed article on the telescope and neutrinos on Popsci.com.

Height of the KM3NeT telescope structure compared to well-known buildings

Images property of KM3NeT Consortium 

Looking at Early Black Holes with a ‘Time Machine’

The large-scale cosmological mass distribution in the simulation volume of MassiveBlack. The projected gas density over the whole volume (“unwrapped” into 2D) is shown in the large-scale (background) image. The two images on top show successive zoom-ins, each by a factor of 10, on the region where the most massive black hole – one of the first quasars – formed. The black hole is at the center of the image and is being fed by cold gas streams. Image courtesy of Yu Feng.

[/caption]

What fed early black holes enabling their very rapid growth? A new discovery made by researchers at Carnegie Mellon University using a combination of supercomputer simulations and GigaPan Time Machine technology shows that a diet of cosmic “fast food” (thin streams of cold gas) flowed uncontrollably into the center of the first black holes, causing them to be “supersized” and grow faster than anything else in the Universe.

When our Universe was young, less than a billion years after the Big Bang, galaxies were just beginning to form and grow. According to prior theories, black holes at that time should have been correspondingly small. Data from the Sloan Digital Sky Survey has shown evidence to the contrary: supermassive black holes were in existence as early as 700 million years after the Big Bang.

“The Sloan Digital Sky Survey found supermassive black holes at less than 1 billion years. They were the same size as today’s most massive black holes, which are 13.6 billion years old,” said Tiziana Di Matteo, associate professor of physics (Carnegie Mellon University). “It was a puzzle. Why do some black holes form so early when it takes the whole age of the Universe for others to reach the same mass?”

Supermassive black holes are the largest black holes in existence, weighing in with masses billions of times that of the Sun. Most “normal” black holes are only about 30 times more massive than the Sun. The currently accepted mechanism for the formation of supermassive black holes is through galactic mergers. One problem with applying this theory to the earliest supermassive black holes is that in the early Universe there weren’t many galaxies, and those that existed were too distant from each other to merge.

Rupert Croft, associate professor of physics (Carnegie Mellon University), remarked, “If you write the equations for how galaxies and black holes form, it doesn’t seem possible that these huge masses could form that early. But we look to the sky and there they are.”

In an effort to understand the processes that formed the early supermassive black holes, Di Matteo, Croft and Khandai created MassiveBlack – the largest cosmological simulation to date. The purpose of MassiveBlack is to accurately simulate the first billion years of our universe. Describing MassiveBlack, Di Matteo remarked, “This simulation is truly gigantic. It’s the largest in terms of the level of physics and the actual volume. We did that because we were interested in looking at rare things in the universe, like the first black holes. Because they are so rare, you need to search over a large volume of space”.

Croft and the team started the simulations using known models of cosmology based on theories and laws of modern day physics. “We didn’t put anything crazy in. There’s no magic physics, no extra stuff. It’s the same physics that forms galaxies in simulations of the later universe,” said Croft. “But magically, these early quasars, just as had been observed, appear. We didn’t know they were going to show up. It was amazing to measure their masses and go ‘Wow! These are the exact right size and show up exactly at the right point in time.’ It’s a success story for the modern theory of cosmology.”

The data from MassiveBlack were added to the GigaPan Time Machine project. By combining the two, researchers were able to view the simulation as if it were a movie, easily panning across the simulated universe as it formed. When the team noticed events that appeared interesting, they could zoom in to view them in greater detail than ground- or space-based telescopes could achieve in our own universe.

When the team zoomed in on the creation of the first supermassive black holes, they saw something unexpected. Normally, when cold gas flows toward a black hole, it is heated by collisions with other nearby gas molecules and must cool down again before entering the black hole. Known as ‘shock heating’, this process should have stopped early black holes from reaching the masses observed. Instead, the team saw thin streams of cold, dense gas flowing along the ‘filaments’ seen in large-scale surveys of the structure of our universe. These filaments let the gas flow directly into the centers of the black holes at incredible speed, providing them with cold, fast food. This steady but uncontrolled consumption gave the black holes a way to grow much faster than their host galaxies.

The findings will be published in the Astrophysical Journal Letters.

If you’d like to read more, check out the papers below (via the physics arXiv):
Terapixel Imaging of Cosmological Simulations
The Formation of Galaxies Hosting z~6 Quasars
Early Black Holes in Cosmological Simulations
Cold Flows and the First Quasars

Learn more about Gigapan and MassiveBlack at: http://gigapan.org/gigapans/76215/ and http://www.psc.edu/science/2011/supermassive/

Source: Carnegie Mellon University Press Release

Sagittarius Dwarf Galaxy – A Beast With Four Tails?

A map of the sky showing the numbers of stars counted in the Sagittarius streams. The dotted red lines trace out the Sagittarius streams, and the blue ellipses in the center show the current location of the Sagittarius Dwarf Galaxy. Image credit: S. Koposov and the SDSS-III collaboration

[/caption]

Galactic interactions can have big effects on the shapes of the disks of galaxies. So what happens when a small galaxy intermingles with the outer part of our own larger Milky Way Galaxy? It’s not pretty: rivers of stars are being sheared off the neighboring Sagittarius dwarf galaxy, according to research by a team of astronomers led by Sergey Koposov and Vasily Belokurov (University of Cambridge).

Analyzing data from the latest Sloan Digital Sky Survey (SDSS-III), the team found two streams of stars in the Southern Galactic hemisphere that were torn off the Sagittarius dwarf galaxy. The new discovery also connects the newly found streams with two previously discovered streams in the Northern Galactic hemisphere.

Describing the phenomenon, Koposov said, “We have long known that when small dwarf galaxies fall into bigger galaxies, elongated streams, or tails, of stars are pulled out of the dwarf by the enormous tidal field.”

Wyn Evans, one of the other team members commented, “Sagittarius is like a beast with four tails.”

At one time, the Sagittarius dwarf galaxy was one of the brightest of our Galaxy’s satellites. Now its remains are on the other side of our Galaxy, and in the process of being broken apart by immense tidal forces. Estimates show that the Sagittarius dwarf galaxy lost half its stars and gas over the past billion years.

Before the SDSS-III data analysis, Sagittarius was known to have two tails, one in front of and one behind the remnant. That picture came from earlier SDSS imaging: a 2006 study found that the Sagittarius tidal tail in the Northern Galactic sky appears to be split in two.

Commenting on the previous discovery, Belokurov added, “That was an amazing discovery, but the remaining piece of the puzzle, the structure in the South, was missing until now.”

Analyzing density maps of over 13 million stars in the SDSS-III data, Koposov and his team found that the Sagittarius stream in the South is also split into two. One stream is thicker and brighter, while the other is thinner and fainter. According to the paper, the fainter stream is simpler and more metal-poor, while the brighter stream is more complex and metal-rich.

The deduction makes sense, since each successive generation of stars creates and distributes (via supernovae) more metals into the next generation of star formation.

An artist's impression of the four tails of the Sagittarius Dwarf Galaxy (the orange clump on the left of the image) orbiting the Milky Way. The bright yellow circle to the right of the galaxy's center is our Sun (not to scale). Image credit: Amanda Smith (University of Cambridge)

While the exact cause of the tidal tail split is unknown, astronomers believe the Sagittarius dwarf may once have been part of a binary galactic system, much like the Large and Small Magellanic Clouds visible from our Southern hemisphere. Whatever split the tails, astronomers have long known that over time many smaller galaxies have been torn apart or absorbed by our Milky Way Galaxy, as they have by other galaxies throughout the Universe.

The movie (below) shows multiple streams produced by the disruption of the Sagittarius dwarf galaxy in the Milky Way halo. Our Sun is depicted by the orange sphere. The Sagittarius dwarf galaxy is in the middle of the stream. The area shown in the movie is roughly 200,000 parsecs (about 600,000 light-years.) Movie credit: S. Koposov and the SDSS-III collaboration.

If you’d like to learn more, you can read the full scientific paper at: arxiv.org

Source: SDSS press release; arXiv paper 1111.7042