Type II-P Supernovae as a New Standard Candle


Much of astronomical knowledge is built on the cosmic distance ladder, which is used to determine distances to objects in our sky. Low-lying rungs for nearby objects calibrate the methodology for more distant objects, which are, in turn, used to calibrate for objects more distant still, and so on. One reason so many rungs are needed is that techniques often become difficult or impossible to use past a certain distance. Cepheid variables are fantastic objects for measuring distances, but their luminosity only allows us to detect them out to a few tens of millions of parsecs. As such, new techniques based on brighter objects must be developed.

The most famous of these is the use of Type Ia supernovae (white dwarfs that explode as they approach the Chandrasekhar limit) as “standard candles”. This class of objects has a well-defined standard luminosity, and by comparing the apparent brightness to the actual brightness, astronomers can determine distance via the distance modulus. But this relies on the fortuitous circumstance of having such an event occur when you want to know the distance! Obviously, astronomers need other tricks up their sleeve for cosmological distances, and a new study discusses the possibility of using another type of supernova (SN II-P) as another form of standard candle.
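The distance modulus behind that comparison is simple arithmetic: the difference between apparent magnitude m and absolute magnitude M fixes the distance. A minimal sketch in Python (the magnitudes below are illustrative values, not figures from the study):

```python
def distance_from_modulus(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d) - 5."""
    mu = apparent_mag - absolute_mag
    return 10 ** (mu / 5 + 1)

# A Type Ia supernova peaks near absolute magnitude M = -19.3; one observed
# at apparent magnitude m = 15.7 would then lie at 10^8 parsecs (100 Mpc):
d_pc = distance_from_modulus(15.7, -19.3)
```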

Type II-P supernovae are classical core-collapse supernovae, which occur when the core of a star can no longer support the mass above it. But unlike other supernovae, a II-P fades more slowly, leveling off for some time and creating a “plateau” in the light curve (which is where the “P” comes from). Although the plateaus do not all sit at the same brightness, which initially made these objects useless as standard candles, studies over the past decade have shown that observing other properties may allow astronomers to determine what the brightness of the plateau actually is, making these supernovae “standardizable”.

In particular, discussion has recently centered on possible connections between the velocity of the ejecta and the brightness of the plateau. A study published by D’Andrea et al. earlier this year attempted to link the absolute brightness to the velocities of the Fe II line at 5169 Angstroms. However, this method left large experimental uncertainties, which translated to an error of up to 15% in the distance.

In a new paper, to be published in the October issue of the Astrophysical Journal, a team led by Dovi Poznanski of the Lawrence Berkeley National Laboratory attempts to reduce these errors by utilizing the hydrogen beta line. One of the primary advantages of this approach is that hydrogen is much more plentiful, allowing the hydrogen beta line to stand out, whereas the Fe II lines tend to be weak. This improves the signal-to-noise (S/N) ratio and the overall quality of the data.

Using data from the Sloan Digital Sky Survey (SDSS), the team was able to decrease the error in distance determination to 11%. Although this is an improvement over the D’Andrea et al. study, it is still significantly higher than many other methods of distance determination at similar distances. Poznanski suggests that the data are likely skewed by a natural bias towards brighter supernovae. This systematic error stems from the fact that the SDSS data are supplemented with follow-up observations, which the team employed, but the follow-ups are only conducted if the supernova meets certain brightness criteria. As such, their sample is not fully representative of all supernovae of this type.

To improve their calibration and hopefully refine the method, the team plans to continue their study with expanded data from other surveys that would be free of such biases. In particular, the team intends to use the Palomar Transient Factory to supplement their results.

As the statistics improve, astronomers will gain another rung on the cosmological distance ladder, but only if they’re lucky enough to find one of this type of supernova.

The Thick Disk: Galactic Construction Project or Galactic Rejects?

Our Milky Way Gets a Makeover


The disk of a spiral galaxy is composed of two main components. The thin disk holds the majority of the stars and gas and is most of what we see and picture when we think of spiral galaxies. Hovering around it, however, is a thicker disk of stars that is much less populated. This thick disk is distinct from the thin disk in several regards: its stars tend to be older, metal deficient, and to orbit the center of the galaxy more slowly.

But where this population of stars came from has been a long-standing mystery since its identification in the mid-1970s. One hypothesis is that it is the remainder of cannibalized dwarf galaxies that have never settled into a more standard orbit. Others suggest that these stars have been flung from the thin disk through gravitational slingshots or supernovae. A recent paper puts these hypotheses to the observational test.

At first glance, both propositions seem to have a firm observational footing. The Milky Way galaxy is known to be in the process of merging with several smaller galaxies. As our galaxy pulls them in, tidal effects shred these minor galaxies, scattering their stars. Numerous tidal streams of this sort have already been discovered. The ejection scenario gains support from the many known “runaway” and “hypervelocity” stars, which have sufficient velocity to escape the thin disk and, in some cases, the galaxy itself.

The new study, led by Marion Dierickx of Harvard, follows up on a 2009 study by Sales et al., which used simulations to examine the orbital properties thick-disk stars would have if they were created via these mechanisms. Through these simulations, Sales showed that the distributions of orbital eccentricities should differ between scenarios, providing a method by which to discriminate between them.

By using data from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7), Dierickx’s team compared the distribution of the stars in our own galaxy to the predictions made by the various models. Ultimately, their survey included some 34,000 stars. By comparing the histogram of eccentricities to that of Sales’ predictions, the team hoped to find a suitable match that would reveal the primary mode of creation.

The comparison revealed that, if ejection from the thin disk were the norm, there were too many stars in nearly circular orbits as well as in highly eccentric ones; in general, the observed distribution was too wide for that scenario. However, the merger scenario fit well, lending strong credence to this hypothesis.

While the ejection hypothesis and others can’t be ruled out completely, the result suggests that, at least in our own galaxy, they play a rather minor role. In the future, additional tests will likely be employed, analyzing other aspects of this population.

Follow-up Studies on June 3rd Jupiter Impact


Poor Jupiter just can’t seem to catch a break. Ever since 1994, when our largest planet was hit by Comet Shoemaker-Levy 9, detections of impacts on Jupiter have occurred with increasing regularity. On June 3rd of 2010 (coincidentally the same day Hubble pictures of a 2009 impact were released), Jupiter was hit yet again, and most recently another impact was witnessed on August 20. Shortly after the June 3rd impact, several other telescopes joined the observing.

A paper to appear in the October issue of The Astrophysical Journal Letters discusses the science that has been gained from these observations.

The June 3rd impact was novel in several respects. It was the first unexpected impact to be reported from two independent locations simultaneously. Both discoverers were observing Jupiter with the aim of engaging in a bit of astrophotography, and their cameras were set to take a series of quick images, each lasting a fifth to a tenth of a second. These short exposures allowed astronomers, for the first time, to reconstruct the light curve of the meteor. Additionally, the two observers were using different filters (one red and one blue), allowing for exploration of the color distribution.

Analysis of the light curve revealed that the flash lasted nearly two seconds and was not symmetric; the decay in brightness occurred faster than the rise at onset. Additionally, the curve showed several distinct “bumps”, indicating a flickering that is commonly seen in meteors on Earth.

The light released as the object burned up was used to estimate the total energy released and, in turn, the mass of the object. The total energy released was estimated to be between roughly (1.0–4.0) × 10¹⁵ joules (or 250–1000 kilotons of TNT).
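The unit conversion in that estimate is worth making explicit; one kiloton of TNT is defined as 4.184 × 10¹² joules, so the quoted energy range converts as follows (a sketch using only the figures above):

```python
KILOTON_TNT_J = 4.184e12  # joules per kiloton of TNT (standard definition)

def joules_to_kilotons(energy_j):
    return energy_j / KILOTON_TNT_J

low_kt = joules_to_kilotons(1.0e15)   # ~239 kt
high_kt = joules_to_kilotons(4.0e15)  # ~956 kt, matching the quoted 250-1000 kt range
```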

Follow-up observations from Hubble three days later revealed no scars from the impact. In the July 2009 impact, a hole punched in the clouds remained for several days. This indicated the object in the June 3 impact was considerably smaller and burned up before it was able to reach the visible cloud decks.

Observations intended to find debris came up empty; infrared observations showed that no thermal signature remained even as little as 18 hours after the discovery.

Assuming that the object was an asteroid with a relative speed of ~60 km/s and a density of ~2 g/cm³, the team estimated its size to be between 8 and 13 meters, similar to the sizes of the two asteroids that recently passed Earth. This represents the smallest meteor yet observed on Jupiter. An object of similar size was believed to be responsible for the 1994 impact on Earth near the Marshall Islands. Estimates “predict objects of this size to collide with our planet every 6–15 years”, with significantly higher rates on Jupiter, ranging from one to one hundred such events annually.
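Those size figures follow from simple kinematics: given the energy, speed, and density, one can invert E = ½mv² and the mass of a sphere to recover a diameter. A sketch using the quoted assumptions:

```python
import math

def diameter_from_energy(energy_j, speed_m_s=6.0e4, density_kg_m3=2000.0):
    """Invert E = 1/2 m v^2 with m = rho * (4/3) * pi * (d/2)^3 for the diameter (m)."""
    mass = 2.0 * energy_j / speed_m_s ** 2
    volume = mass / density_kg_m3
    return 2.0 * (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

d_low = diameter_from_energy(1.0e15)   # ~8 m for the low-energy estimate
d_high = diameter_from_energy(4.0e15)  # ~13 m for the high-energy estimate
```

Reassuringly, the quoted 10¹⁵–4 × 10¹⁵ J energy range reproduces the paper's 8–13 meter size range under these assumptions.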

Clearly, amateur observations led to some fantastic science. Modest telescopes, “in the range 15–20 cm in diameter equipped with webcams and video recorders” can easily allow for excellent coverage of Jupiter and continued observation could help in determining the impact rate and lead to a better understanding of the population of such small bodies in the outer solar system.

Do Stars Really Form in Clusters?

The long-standing view on the formation of stars is that they form in clusters. This theory is supported by our understanding of the formation process, which requires large clouds of gas and dust in order to condense. Small clouds with only enough mass to form one star simply can’t meet the conditions required to collapse. In a large cloud, where conditions are sufficient, once one star begins to form, the feedback effects from that star will trigger further star formation. Thus, if you get one star, you’ll likely get lots.

But a new paper takes a critical look at whether or not all stars really form in clusters.

The main difficulty in answering this question boils down to a simple one: what does it mean to be “in” a cluster? Generally, members of a cluster are stars that are gravitationally bound to it. But as time passes, most clusters shed members as gravitational interactions, both internal and external, strip away outer members. This blurs the boundary between being bound and unbound.

Similarly, some groupings that initially look very similar to clusters can actually be what are known as associations. As the name suggests, while these stars are in close proximity, they are not truly bound together. Instead, their relative velocities will cause the group to disperse without the need for other effects.

As a result, astronomers have considered other requirements for true cluster membership. In particular, for forming stars, there is an expectation that cluster stars should be able to interact with one another during the formation process.

It is these considerations that the new team, led by Eli Bressert of the University of Exeter, uses as a basis. Using observations from Spitzer, the team analyzed 12 nearby star-forming regions. By conducting the survey with Spitzer, an infrared telescope, the team was able to pierce the dusty veil that typically hides such young stars.

By looking at the density of young stellar objects (YSOs) in the plane of the sky, the team attempted to determine just what portion of stars could be considered true cluster members under various definitions. As might be expected, the answer was highly dependent on the definition used. If a loose and inclusive definition was adopted, they determined that 90% of YSOs would be considered part of a forming cluster. However, if the definition was drawn at the narrow end, the percentage dropped as low as 40%. Furthermore, if the additional criterion was imposed that stars be in such proximity that their “formation/evolution (along with their circumstellar disks and/or planets) may be affected by the close proximity of their low-mass neighbours”, the percentage dropped to a scant 26%.
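Surface-density cuts of this sort are typically made with a nearest-neighbour estimator; a minimal, hypothetical version is sketched below (the choice n = 6 is an assumption for illustration, not necessarily the paper's parameter):

```python
import math

def local_surface_density(x, y, xs, ys, n=6):
    """Nth-nearest-neighbour estimate of surface density: Sigma = (n - 1) / (pi * r_n^2),
    where r_n is the distance from (x, y) to its nth nearest neighbour."""
    dists = sorted(math.hypot(x - xi, y - yi) for xi, yi in zip(xs, ys))
    r_n = dists[n]  # dists[0] is the star itself (distance 0) if it is in the catalog
    return (n - 1) / (math.pi * r_n ** 2)

# Toy catalog: one star at the origin with neighbours strung out along a line,
# so the 6th-nearest neighbour sits at distance 6.
xs = [0.0] + [float(i) for i in range(1, 11)]
ys = [0.0] * len(xs)
sigma = local_surface_density(0.0, 0.0, xs, ys)  # -> 5 / (36 * pi)
```

Varying the density threshold applied to such a map is exactly what moves the “clustered” fraction between the 90% and 40% extremes quoted above.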

As with other definitional boundaries, the quibbling may seem little more than a distraction. However, with such widely varying numbers attached to them, these distinctions carry great significance, since inconsistent definitions can greatly distort our understanding. This study highlights the need for clarity of definition, something astronomers constantly struggle with in a muddled universe full of overlapping populations and shades of gray.

Does Tidal Evolution Cause Stars to Eat Planets?


With the success of the Kepler mission, the viability of looking for planets via transits has reached maturity. However, Kepler is not the first intensive study; other observatories have previously employed transit searches. To increase the chances of discovery, such studies often concentrated on large clusters in which thousands of stars could be observed simultaneously. Based on the percentage of stars with super-Jovian planets in the Sun’s vicinity, a Hubble observing run on the globular cluster 47 Tuc was expected to find roughly 17 “hot Jupiters”. Yet not a single one was found. Follow-up studies on other regions of 47 Tuc, published in 2005, also reported a similar lack of signals.

Could the subtle effect of tidal forces have caused the planets to be consumed by their parent stars?

Within our solar system, the effects of tidal influences are more subtle than planetary destruction. But for stars with massive planets in tight orbits, the effects can be very different. As a planet orbits its parent star, its gravitational pull raises a bulge in the star’s photosphere. In a frictionless environment, this bulge would remain directly under the planet. Since the real world has real friction, the bulge is displaced.

If the star rotates more slowly than the planet orbits (a likely scenario for close-in planets, since stars slow themselves via magnetic braking during formation), the bulge will trail behind the planet, since its pull has to compete against the photospheric material through which it is dragged. The same effect operates in the Earth-Moon system and is why high tide does not occur when the moon is overhead, but some time later. This lagging bulge creates a component of gravitational force opposed to the planet’s direction of motion, slowing it down. As time goes on, the planet is dragged closer to the star by this torque, which increases the gravitational force and accelerates the process until the planet eventually enters the star’s photosphere.

Since transit discoveries rely on the planet’s orbital plane lying almost exactly along our line of sight to the parent star, the method favors planets in very tight orbits: planets further out are more likely to pass above or below their parent star as viewed from Earth. The result is that the planets this method can potentially discover are especially prone to tidal slowing and destruction. This effect, in combination with the old age of 47 Tuc, may explain the dearth of discoveries.
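The geometry here can be quantified: for a circular orbit around a Sun-like star, the probability that a randomly oriented planet transits is roughly the stellar radius divided by the orbital distance. A sketch with illustrative values:

```python
R_SUN_AU = 0.00465  # one solar radius expressed in astronomical units

def transit_probability(a_au, r_star_au=R_SUN_AU):
    """Approximate geometric transit probability for a circular orbit: R_star / a."""
    return r_star_au / a_au

p_hot = transit_probability(0.05)  # hot Jupiter at 0.05 AU: ~9% chance
p_far = transit_probability(5.2)   # a Jupiter analog at 5.2 AU: ~0.09%
```

The two-orders-of-magnitude gap between these probabilities is why transit surveys are so heavily weighted toward exactly the planets most vulnerable to tidal destruction.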

Using a Monte Carlo simulation, a recent paper explores this possibility and finds that, with tidal effects included, the non-detection in 47 Tuc is completely accounted for without the need for additional explanations (such as the metal deficiency of the cluster). However, to go beyond simply explaining a null result, the team made several predictions that would serve to confirm the destruction of such planets. If a planet were wholly consumed, its heavier elements should be present in the atmosphere of its parent star and thus detectable in the star’s spectrum, in contrast with the overall chemical composition of the cluster. Planets that were tidally stripped of their atmospheres by filling their Roche lobes could still be detected as an excess of rocky super-Earths.

Another test could involve a comparison between several of the open clusters visible to the Kepler survey. Should astronomers find that the probability of finding hot Jupiters decreases with increasing cluster age, this would also confirm the hypothesis. Since several such clusters lie within the area planned for the Kepler survey, this option is the most readily accessible. Ultimately, this result makes it clear that, should astronomers rely on methods best suited to short-period planets, they may need to expand their observation window, since planets with sufficiently short periods may be prone to being consumed.

The Other End of the Planetary Scale


The definition of a “planet” is one that has seen a great deal of contention. The ad-hoc redefinition has caused much grief for lovers of the demoted Pluto. Yet little attention is paid to the other end of the planetary scale, namely, where the cutoff between a star and a planet lies. The general consensus is that an object capable of fusing deuterium (a form of hydrogen with a neutron in the nucleus, which can undergo fusion at lower temperatures) is a brown dwarf, while anything below that is a planet. This limit has been estimated to be around 13 Jupiter masses, but while the line in the sand may seem clear at first, a new paper explores the difficulty of pinning down this discriminating factor.

For many years, brown dwarfs were mythical creatures. Their low temperatures, even while undergoing deuterium fusion, made them difficult to detect. Many candidates were proposed, but all failed the discriminating test of having lithium present in their spectrum (lithium is destroyed at the temperatures of traditional hydrogen fusion, so its survival marks an object below the stellar mass limit). This changed in 1995, when the 670.8 nm lithium line was detected in an object of suitable mass, confirming the first brown dwarf.

Since then, the number of identified brown dwarfs has increased significantly, and astronomers have discovered that the lower mass range of purported brown dwarfs seems to overlap with that of massive planets. This includes objects such as CoRoT-3b, a brown dwarf of approximately 22 Jovian masses, which exists in terminological limbo.

The paper, led by David Spiegel of Princeton, investigated a wide range of initial conditions for objects near the deuterium-burning limit. Among the variables, the team considered the initial fractions of helium, deuterium, and “metals” (everything beyond helium on the periodic table). Their simulations revealed that just how much of the deuterium burned, and how fast, was highly dependent on the starting conditions. Objects starting with a higher helium concentration required less mass to burn a given amount of deuterium. Similarly, the higher the initial deuterium fraction, the more readily it fused. The differences in required mass were not subtle, either: they varied by as much as two Jovian masses, extending as low as a mere 11 times the mass of Jupiter, well below the generally accepted limit.

The authors suggest that, because of the inherent confusion in the mass limits, such a definition may not be the “most useful delineation between planets and brown dwarfs.” As such, they recommend astronomers take extra care in their classifications and recognize that a new definition may be necessary. One possible definition could involve the formation history of objects in the questionable mass range: objects that formed in disks around other stars would be considered planets, whereas objects that formed from gravitational collapse independently of the object they orbit would be considered brown dwarfs. In the meantime, objects such as CoRoT-3b will continue to have their taxonomic categorization debated.

Aesthetics of Astronomy

This Hubble image reveals the gigantic Pinwheel Galaxy (M101), one of the best known examples of "grand design spirals," and its supergiant star-forming regions in unprecedented detail. Astronomers have searched galaxies like this in a hunt for the progenitors of Type Ia supernovae, but their search has turned up mostly empty-handed. Credit: NASA/ESA


When I tell people I majored in astronomy, the general reaction is one of shock and awe. Although people don’t realize just how much physics it involves (which scares them even more when they find out), they’re still impressed that anyone would choose to major in a physical science. Quite often, I’m asked, “Why did you choose that major?”

Only somewhat jokingly, I reply, “Because it’s pretty.” For what reasons would we explore something if we did not find some sort of beauty in it? This answer also tends to steer potential follow-up questions to topics of images they’ve seen and away from half-heard stories about black holes from sci-fi movies.

The topic of aesthetics in astronomy is one I’ve used here for my own purposes, but a new study explores how we view astronomical images and what sorts of information people, both expert and amateur, take from them.

The study was conducted by a group formed in 2008 known as The Aesthetics and Astronomy Group, comprised of astrophysicists, astronomy image development professionals, educators, and specialists in the aesthetic and cognitive perception of images. The group posed two questions to guide its study:

1. How much do variations in the presentation of color, explanatory text, and illustrative scales affect the comprehension of, aesthetic attraction to, and time spent looking at deep space imagery?

2. How do novices differ from experts in terms of how they look at astronomical images?

Data to answer these questions were taken from two groups. The first was an online survey of volunteers recruited through solicitations on various astronomy websites, with 8866 respondents. The second was comprised of four focus groups held at the Harvard-Smithsonian Center for Astrophysics.

To analyze how viewers interpreted color, the web study contained two pictures of the elliptical galaxy NGC 4696. The images were identical except for the colors chosen to represent different temperatures: in one, red represented hot regions and blue cold regions; in the other, the color scheme was reversed. A slight majority (53.3% to 46.7%) said they preferred the version in which blue was assigned to be the hotter color. When asked which image they thought was the “hotter” one, however, 71.5% responded that the red image was hotter. Since astronomical images often assign blue as the hotter color (hotter objects emit shorter-wavelength, higher-frequency light, towards the blue end of the visible spectrum), this suggests that the public’s perception of such images is likely reversed.

A second image for the web group divided the participants into four groups, in which an image of a supernova remnant was shown with or without foreground stars and with or without a descriptive caption. When asked to rate its attractiveness, participants rated the captioned version slightly higher (7.96 to 7.60 on a 10-point scale). Not surprisingly, those who viewed the captioned versions were more likely to correctly identify the object in the image. The version with stars was also more often identified correctly, even without captions, suggesting that the appearance of stars provides important context. Another question asked how the object’s size compared to the Earth, the Solar System, and the Galaxy. Although the caption gave the scale of the supernova remnant in light-years, the participants who saw the caption did not fare better on this question, suggesting that such information exceeds the limit of what viewers absorb.

The next portion showed an image of the Whirlpool Galaxy, M51, accompanied by either no text, a standard blurb, a narrative blurb, or a sectioned caption with questions as headers. Taking into consideration the time spent reading the captions, the team found that those with text spent more time viewing the image, suggesting that accompanying text encourages viewers to take a second look at the image itself. The version with a narrative caption prompted the most extra time.

Another set of images explored the use of scales by superimposing circles representing the Earth, a circle of 300 miles, both, or neither onto an image of spicules on the Sun’s surface, with or without text. Predictably, the versions with scales and text were viewed longer; the image with both scales was viewed the longest and produced the best responses on a true/false quiz about the information in the image.

When comparing self-identified experts to novices, the study found that both viewed uncaptioned images for similar lengths of time, but for images with text, novices spent an additional 15 seconds reviewing the image compared to experts. Among the styles of presenting text (short blurb, narrative, or question-headed), novices preferred the ones in which topics were introduced with questions, whereas experts rated all of them similarly, suggesting they don’t care how the information is given, so long as it’s present.

The focus groups were given similar images, but were prompted for free responses in discussions.

[T]he non-professionals wanted to know what the colors represented, how the images were made, whether the images were composites from different satellites, and what various areas of the images were. They wanted to know if M101 could be seen with a home telescope, binoculars, or the naked eye.

Additionally, they were also interested in historical context and insights from what professional astronomers found interesting about the images.

Professionals, on the other hand, responded with a general pattern of “I want to know who made this image and what it was that they were trying to convey. I want to judge whether this image is doing a good job of telling me what it is they wanted me to get out of this.” Eventually, they discussed the aesthetic nature of the images, which reveals that “novices … work from aesthetics to science, and for astrophysicists … work from science to aesthetics.”

Overall, the study found an enthusiastic public audience, eager to learn to view the images not just as pretty pictures but as scientific data. It suggested that a conversational tone that works up to technical language works best. These findings can be used to improve the communication of scientific objectives in museums, in the astrophotography sections of observatories, and even in the presentation of astronomical images in personal conversation.

Two New Asteroids to Pass Earth This Week


Two newly discovered asteroids will pass the Earth this week. Both were discovered on September 5th of this year by Andrea Boattini using the 1.5-meter reflector at Mount Lemmon in Arizona as part of the Mount Lemmon Survey.

These two new asteroids have been given the designations 2010 RF12 and 2010 RX30. Both are small bodies, which is why they were not discovered until mere days before they would pass the Earth. Estimates put the size of RF12 at 5–15 meters, with a best estimate of around 8 meters (26 ft). The larger RX30 is estimated at 12 meters (39 ft), with estimates ranging from 7 to 25 meters.

Due to the large range of size estimates, as well as poorly constrained relative velocities and an unknown composition, it would be difficult to predict the damage an impact from these bodies could cause. The majority of the mass of such small objects would burn up in the atmosphere, with only small fragments surviving to the ground. For comparison, the object that caused the Tunguska event was estimated to be at least a few tens of meters in diameter at the point it exploded in the atmosphere, some few miles up. Since the diameter determines the volume, and thus the mass and kinetic energy, small increases in size rapidly increase the potential damage. In any case, although the bodies were only discovered this week, their orbits have already been well established for the near future, and neither will collide with Earth. Both are rated 0 on the Torino scale (data from NASA’s NEO Program for RF12 and RX30 can be seen here and here respectively).
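That scaling is steep: kinetic energy grows with the cube of the diameter, so even a modest increase in size means a far more energetic impact. A rough sketch (the 20 km/s entry speed and 2 g/cm³ density are illustrative assumptions, not measured values for these bodies):

```python
import math

def impact_energy_j(diameter_m, density_kg_m3=2000.0, speed_m_s=2.0e4):
    """Kinetic energy of a spherical impactor: E = 1/2 * (rho * pi/6 * d^3) * v^2."""
    mass = density_kg_m3 * (math.pi / 6.0) * diameter_m ** 3
    return 0.5 * mass * speed_m_s ** 2

# Doubling the diameter multiplies the impact energy by a factor of eight:
ratio = impact_energy_j(20.0) / impact_energy_j(10.0)
```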

Although both objects will pass closer to the Earth than the moon, due to their small size neither will be visible to the naked eye. 2010 RF12 is expected to pass the Earth at 21% of the Earth-moon distance and, at maximum brightness, reach only 14th magnitude, which is just over 600 times too faint to see with the unaided eye. RX30 will approach at 66% of the Earth-moon distance and is expected to reach a similar peak magnitude. For those interested in tracking or photographing these objects, the Faulkes Telescope Project has created a page dedicated to them, including the best exposure times and filters for cameras, which can be found here. Ephemerides for RF12 and RX30 can be found here and here respectively.
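The “600 times too faint” figure follows from the Pogson magnitude relation, where each magnitude step is a factor of 10^0.4 ≈ 2.512 in brightness. A sketch, assuming a dark-sky naked-eye limit of about magnitude 7:

```python
def brightness_ratio(mag_faint, mag_bright):
    """Pogson relation: a difference of Dm magnitudes is a flux factor of 10^(0.4 * Dm)."""
    return 10 ** (0.4 * (mag_faint - mag_bright))

# 14th magnitude compared with an assumed naked-eye limit of magnitude 7:
ratio = brightness_ratio(14.0, 7.0)  # ~631, i.e. "just over 600 times too faint"
```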

Although both asteroids were discovered on the same day and will approach at nearly the same time, their orbits do not appear to be related. RF12’s orbit extends from 0.82 to 1.17 AU, and it orbits the Sun once every year; predictions show it passes near the Earth only about once every hundred years. RX30 was initially thought to rotate extremely fast, but revised observations have shown that it takes at least 6 hours to rotate about its axis.

The Origin of Exoplanets


We truly live in an amazing time for exoplanet research. It was only 18 years ago that the first planet outside our solar system was discovered, and fifteen since the first confirmation of one around a main-sequence star. More recently, direct images have begun to appear, as well as the first spectra of the atmospheres of such planets. So much data is becoming available that astronomers have even begun to make inferences about how these extra-solar planets could have formed.

In general, there are two methods by which planets can form. The first is coaccretion, in which the star and the planet form from gravitational collapse independently of one another, but in close enough proximity that their mutual gravity binds them together in orbit. The second, the method through which our solar system formed, is the disk method, in which material from a thin disk around a proto-star collapses to form a planet. Each of these processes has a different set of parameters that may leave traces, which could allow astronomers to uncover which method is dominant. A new paper from Helmut Abt of Kitt Peak National Observatory looks at these characteristics and determines that, from our current sampling of exoplanets, our solar system may be an oddity.

The first parameter that distinguishes the two formation methods is eccentricity. To establish a baseline for comparison, Abt first plotted the distribution of eccentricities for 188 main-sequence binary stars and compared it to the same type of plot for the only system known to have formed via the disk method (our own). This revealed that, while the majority of binary stars have orbits with low eccentricity, the percentage falls off slowly as the eccentricity increases; in our solar system, in which only one planet (Mercury) has an eccentricity greater than 0.2, the distribution falls off much more steeply. When Abt constructed the distribution for the 379 planets with known eccentricities, it was nearly identical to that of the binary stars.
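Comparisons of this sort, asking whether one empirical eccentricity distribution matches another, are naturally expressed as a two-sample Kolmogorov-Smirnov test. A hypothetical sketch with synthetic data (the toy distributions below are stand-ins for illustration, not Abt's samples):

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between the
    empirical cumulative distribution functions of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)

    def cdf(sorted_xs, x):
        return bisect.bisect_right(sorted_xs, x) / len(sorted_xs)

    return max(abs(cdf(a, x) - cdf(b, x)) for x in a + b)

random.seed(42)
# Toy stand-ins: a broad, binary-star-like eccentricity distribution versus a
# solar-system-like one concentrated below e ~ 0.2.
binary_like = [random.random() for _ in range(1000)]
disk_like = [random.betavariate(1.5, 12.0) for _ in range(1000)]
d_stat = ks_statistic(binary_like, disk_like)  # a large value -> very different shapes
```

A small statistic would indicate the exoplanet sample resembles the binary-star distribution, which is the outcome Abt reports.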

A similar plot was created for the semi-major axes of binary stars and of the planets in our solar system. Again, when this was plotted for the known extrasolar planets, the distribution was similar to that of the binary star systems.

Abt also inspected the configuration of the systems. Star systems containing three stars generally consist of a pair of stars in a tight binary orbit with a third in a much larger orbit. By comparing the ratios of such orbits, Abt quantified the orbital spacing. However, instead of simply comparing to the solar system, he considered the analogous situation of stars forming around the central mass of the galaxy and built a similar distribution in this manner. In this case, the results were ambiguous; both modes of formation produced similar results.

Lastly, Abt considered the amount of heavy elements in the more massive body. It is well known that most extrasolar planets are found around metal-rich stars. While there is no reason planets forming in a disk could not form around metal-rich stars, a metal-rich cloud from which to form stars and planets is a requirement for the coaccretion model because metals accelerate the collapse process, allowing giant planets to fully form before the cloud is dissipated as the star becomes active. Thus, the fact that the vast majority of extrasolar planets orbit metal-rich stars favors the coaccretion hypothesis.

Taken together, this provides four tests for formation models. In every case, current observations suggest that the majority of planets discovered thus far formed via coaccretion and not in a disk. However, Abt notes that this is most likely due to statistical biases imposed by the sensitivity limits of current instruments. As he puts it, astronomers “do not yet have the radial velocity sensitivity to detect disk systems like the solar system, except for single large planets, like Jupiter at 5 AU.” As such, this view will likely change as new generations of instruments become available. Indeed, once instruments improve to the point that three-dimensional mapping becomes possible and orbital inclinations can be directly observed, astronomers will be able to add another test to distinguish the modes of formation.

EDIT: Following some confusion and discussion in the comments, I wanted to add one further note. Keep in mind that it is only the average over all currently known systems that looks like a population of coaccreted systems. While there are undoubtedly some systems in the sample that did form from disks, their rarity in the current data keeps them from standing out. Certainly, we know of at least one system that passes a strong test for the disk method: a recent discovery by Kepler, in which three planets have been observed transiting their host star, demonstrates that all of these planets must lie in a disk, which does not conform to expectations of independent condensation. As more systems like this are discovered, we expect that the distributions in the tests described above will become bimodal, with components matching each formation hypothesis.

The Black Hole/Globular Cluster Correlation


Often in astronomy, one observable property traces another property that may be more difficult to observe directly: X-ray activity on stars can be used to trace turbulent heating of the photosphere; CO is used to trace cold H2. Sometimes these correlations make sense: activity in stars produces the X-ray emission. Other times, the tracer seems distantly related at best.

This is the case for a newly discovered correlation between the mass of a galaxy's central black hole and the number of globular clusters it contains. What can this relationship teach astronomers? Why does it hold for some types of galaxies better than others? And where does it come from in the first place?

The mass of a galaxy’s supermassive black hole (SMBH) is known to have a strong relationship with many features of its host galaxy. It has been found to track the velocity dispersion of stars in the galaxy, the mass and luminosity of the bulges of spiral galaxies, and the total amount of dark matter in galaxies. Because the dark matter in galactic halos and the luminosity have also been known to correspond to the number of globular clusters, Andreas Burkert of the Max-Planck-Institute for Extraterrestrial Physics in Germany and Scott Tremaine at Princeton wondered whether they could cut out the middlemen of dark matter and luminosity and still maintain a strong correlation between the central SMBH and the number of globular clusters.

Their initial investigation involved only 13 galaxies, but a follow-up study by Gretchen and William Harris, submitted to the Monthly Notices of the Royal Astronomical Society, increased the number of galaxies included in the survey to 33. The results of these studies indicated that, for elliptical galaxies, the SMBH-GC relationship is evident. For lenticular galaxies, however, there was no clear correlation. While there appeared to be a trend for classical spirals, the small number of data points (4) would not provide a strong statistical case on its own, but they did appear to follow the trend established by the elliptical galaxies.

Although the correlation appeared strong in most cases, about 10% of the galaxies included in the larger survey were clear outliers. These included the Milky Way, whose SMBH mass falls significantly short of the value expected from its cluster count. One source of error the authors of the original study suspect is that, in some cases, objects identified as globular clusters may in fact be the cores of tidally stripped dwarf galaxies. Regardless, the relationship as it presently stands seems quite strong, and is even more tightly defined than the correlation between SMBH mass and velocity dispersion that suggested the potential relationship in the first place. The discordance in lenticular galaxies has not yet been explained, and no causes have yet been postulated.
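Scaling relations like this one are typically fit as power laws in log space, with outliers showing up as large residuals from the fit. As a minimal sketch of that procedure (using made-up numbers, not the values from either study), the last entry below plays the role of a Milky-Way-like galaxy whose black hole is undersized for its cluster count:

```python
import numpy as np

# Hypothetical globular cluster counts and SMBH masses (solar masses)
# for six galaxies; values are illustrative only. The last entry is an
# outlier with too small a black hole for its cluster count.
n_gc = np.array([200, 500, 1500, 4000, 10000, 150])
m_bh = np.array([2e7, 6e7, 2e8, 6e8, 1.5e9, 4e6])

log_n, log_m = np.log10(n_gc), np.log10(m_bh)

# Least-squares fit of log(M_BH) = a * log(N_GC) + b
a, b = np.polyfit(log_n, log_m, 1)
residuals = log_m - (a * log_n + b)

print(f"slope = {a:.2f}, intercept = {b:.2f}")
# Galaxies lying well below the fitted line, as the Milky Way does in
# the study, have the most negative residuals.
print("largest negative residual at index", int(np.argmin(residuals)))
```

Framing the fit in log space is what makes a "tight" relation meaningful here: scatter in the residuals translates directly into a multiplicative uncertainty in the black hole mass predicted from a galaxy's globular cluster count.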

But what of the cause of this unusual relation? Both sets of authors suggest the connection lies in the formation of the objects. While distinct in most respects, both are fed by major merger events: black holes gain mass by accreting gas, and globular clusters are often formed in the resulting shocks and interactions. Additionally, the majority of both types of objects formed at high redshifts.

Sources:

A correlation between central supermassive black holes and the globular cluster systems of early-type galaxies

The Globular Cluster/Central Black Hole Connection in Galaxies