Astronomy Without A Telescope – Star Seeds

The Rho Ophiuchi cloud complex - within which cloud L1688 is the most active star-forming location. Credit NASA.

Molecular clouds are so called because they have sufficient density to support the formation of molecules, most commonly molecular hydrogen (H2). Their density also makes them ideal sites for new star formation – and if star formation is prevalent in a molecular cloud, we tend to give it the less formal title of stellar nursery.

Traditionally, star formation has been difficult to study as it takes place within thick clouds of dust. However, observation of far-infrared and sub-millimetre radiation coming out of molecular clouds allows data to be collected about prestellar objects, even if they can’t be directly visualized. Such data are drawn from spectroscopic analysis – where spectral lines of carbon monoxide are particularly useful in determining the temperature, density and dynamics of prestellar objects.

Far-infrared and sub-millimetre radiation can be absorbed by water vapor in Earth’s atmosphere, making astronomy at these wavelengths difficult to achieve from sea level – but relatively easy from low humidity, high altitude locations such as Mauna Kea Observatory in Hawaii.

Simpson et al undertook a sub-millimeter study of the molecular cloud L1688 in Ophiuchus, particularly looking for protostellar cores with blue asymmetric double (BAD) peaks – which signal that a core is undergoing the first stages of gravitational collapse to form a protostar. A BAD peak is identified through Doppler-based estimates of gas velocity gradients across an object. All this clever stuff is done via the James Clerk Maxwell Telescope on Mauna Kea, using ACSIS and HARP – the Auto-Correlation Spectral Imaging System and the Heterodyne Array Receiver Programme.

A sample of protostellar cores from cloud L1688 in Ophiuchus. Cores with signature blue asymmetric double (BAD) peaks, indicating gas infall due to gravitational collapse, are all on the right side of the Jeans Instability line. This plot enables the likely evolutionary path of protostellar cores to be estimated. Credit: Simpson et al.

The physics of star formation are not completely understood. But, presumably due to a combination of electrostatic forces and turbulence within a molecular cloud, molecules begin to aggregate into clumps which perhaps merge with adjacent clumps until there is a collection of material substantial enough to generate self-gravity.

From this point, a hydrostatic equilibrium is established between gravity and the gas pressure of the prestellar object – although as more matter is accreted, self-gravity increases. Objects can be sustained within the Bonnor-Ebert mass range – where more massive objects in this range are smaller and denser (High Pressure in the diagram). But as mass continues to climb, the Jeans Instability Limit is reached where gas pressure can no longer withstand gravitational collapse and matter ‘infalls’ to create a dense, hot protostellar core.
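
To get a feel for the masses involved, here is a minimal Python sketch of the textbook Jeans mass – the mass above which gas pressure can no longer resist collapse – evaluated for assumed, typical prestellar core conditions (10 K, 10^4 molecules per cubic centimetre); these numbers are illustrative, not values from the Simpson et al paper.

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J K^-1
m_H = 1.673e-27      # hydrogen atom mass, kg
M_sun = 1.989e30     # solar mass, kg

def jeans_mass(T, n, mu=2.33):
    """Jeans mass (kg) for gas at temperature T (K) and number density n (m^-3);
    mu is the mean molecular weight (~2.33 for molecular hydrogen plus helium)."""
    rho = mu * m_H * n                        # mass density, kg m^-3
    return ((5 * k_B * T / (G * mu * m_H)) ** 1.5
            * (3 / (4 * math.pi * rho)) ** 0.5)

# Illustrative prestellar-core conditions: 10 K, 10^4 molecules per cm^3
T, n = 10.0, 1e4 * 1e6                        # convert cm^-3 to m^-3
print(f"Jeans mass ~ {jeans_mass(T, n) / M_sun:.1f} solar masses")
```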

When the core’s temperature reaches 2000 Kelvin, H2 and other molecules dissociate to form a hot plasma. The core is not yet hot enough to drive fusion but it does radiate its heat – establishing a new hydrostatic equilibrium between outward thermal radiation and inward gravitational pull. At this point the object is now officially a protostar.

Being now a substantial center of mass, the protostar is likely to draw a circumstellar accretion disk around it. As it accretes more material and the core’s density increases further, deuterium fusion commences first – followed by hydrogen fusion, at which point a main sequence star is born.

Further reading: Simpson et al The initial conditions of isolated star formation – X. A suggested evolutionary diagram for prestellar cores.

Astronomy Without A Telescope – Oh-My-God Particles

Centaurus A - the closest galaxy with an active galactic nucleus - a mere 10-16 million light years away. Now I wonder where all those ultra high energy cosmic rays are coming from. Hmmm...

Cosmic rays are really sub-atomic particles, being mainly protons (hydrogen nuclei), occasionally helium or heavier atomic nuclei and very occasionally electrons. Cosmic ray particles are very energetic as a result of their substantial velocity and hence substantial momentum.

The Oh-My-God particle detected over Utah in 1991 was probably a proton traveling at 0.999 (and add another 20 x 9s after that) of the speed of light and it allegedly carried the same kinetic energy as a baseball traveling at 90 kilometers an hour.

Its kinetic energy was estimated at 3 x 10^20 electron volts (eV) and it would have had the collision energy of 7.5 x 10^14 eV when it hit an atmospheric particle – since it can’t give up all its kinetic energy in the collision. Fast moving debris carries some of it away and there’s some heat loss too. In any case, this is still about 50 times the collision energy we expect the Large Hadron Collider (LHC) will be able to generate at full power. So, this gives you a sound basis to scoff at doomsayers who are still convinced that the LHC will destroy the Earth.
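
The baseball comparison is easy to check with a little arithmetic – a rough Python sketch, assuming a regulation baseball mass of about 0.145 kg (the particle energy comes from the text; the baseball mass is my assumption):

```python
# Back-of-envelope check of the baseball comparison
eV = 1.602e-19                       # joules per electron volt
E_omg = 3e20 * eV                    # Oh-My-God particle kinetic energy, ~48 J

m_baseball = 0.145                   # kg (assumed regulation baseball mass)
v = (2 * E_omg / m_baseball) ** 0.5  # classical KE = 1/2 m v^2
print(f"{E_omg:.0f} J is a baseball at about {v * 3.6:.0f} km/h")   # ~93 km/h
```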

Now, most cosmic ray particles are low energy, up to 10^10 eV – and arise locally from solar flares. Another more energetic class, up to 10^15 eV, are thought to originate from elsewhere in the galaxy. It’s difficult to determine their exact source as the magnetic fields of the galaxy and the solar system alter their trajectories so that they end up having a uniform distribution in the sky – as though they come from everywhere.

But in reality, these galactic cosmic rays probably come from supernovae – quite possibly in a delayed release process as particles bounce back and forth in the persisting magnetic field of a supernova remnant, before being catapulted out into the wider galaxy.

And then there are extragalactic cosmic rays, which are of the Oh-My-God variety, with energy levels exceeding 10^15 eV, even rarely exceeding 10^20 eV – these are more formally titled ultra-high-energy cosmic rays. Such particles travel very close to the speed of light and must have had a heck of a kick to attain such speeds.

Left image: The energy spectrum of cosmic rays approaching Earth. Cosmic rays with low energies come in large numbers from solar flares (yellow range). Less common, but higher energy cosmic rays originating from elsewhere in the galaxy are in the blue range. The least common but most energetic extragalactic cosmic rays are in the purple range. Right image: The output of the active galactic nucleus of Centaurus A dominates the sky in radio light - this is its apparent size relative to the full Moon. It is likely that nearly all extragalactic cosmic rays that reach Earth originate from Centaurus A.

However, a perhaps exaggerated aura of mystery has traditionally surrounded the origin of extragalactic cosmic rays – as exemplified in the Oh-My-God title.

In reality, there are limits to just how far away an ultra-high-energy particle can originate from – since, if they don’t collide with anything else, they will eventually come up against the Greisen–Zatsepin–Kuzmin (GZK) limit. This represents the likelihood of a fast moving particle eventually colliding with a cosmic microwave background photon, losing momentum, energy and velocity in the process. It works out that extragalactic cosmic rays retaining energies of over 10^19 eV cannot have originated from a source further than 163 million light years from Earth – a distance known as the GZK horizon.

Recent observations by the Pierre Auger Observatory have found a strong correlation between extragalactic cosmic ray arrival patterns and the distribution of nearby galaxies with active galactic nuclei. Biermann and Souza have now come up with an evidence-based model for the origin of galactic and extragalactic cosmic rays – which has a number of testable predictions.

They propose that extragalactic cosmic rays are spun up in supermassive black hole accretion disks, which are the basis of active galactic nuclei. Furthermore, they estimate that nearly all extragalactic cosmic rays that reach Earth come from Centaurus A. So, no huge mystery – indeed a rich area for further research. Particles from an active supermassive black hole accretion disk in another galaxy are being delivered to our doorstep.

Further reading: Biermann and Souza On a common origin of galactic and extragalactic cosmic rays.

Astronomy Without A Telescope – Enough With The Dark Already

It's confirmed that the universe is expanding with a uniform acceleration. Dark energy... not so much. Credit: Swinburne University.

The recent WiggleZ galaxy survey data further confirming that the universe is expanding with a uniform acceleration prompted a lot of ‘astronomers confirm dark energy’ headlines and a lot of heavy sighs from those preferring not to have the universe described in ten words or less.

I mean how the heck did ‘dark energy’ ever become shorthand for ‘the universe is expanding with a uniform acceleration’?

These ‘dark energy confirmed’ headlines risk developing a popular view that the universe is some kind of balloon that you have to pump energy into to make it expand. This is not an appropriate interpretation of the dark energy concept – which only came into common use after 1998, when Type Ia supernova data were announced suggesting an accelerating expansion of the universe.

It was widely accepted well before then that the universe was expanding. A prevalent view before 1998 was that expansion might be driven by the outward momentum of the universe’s contents – a momentum possibly established from the initial cosmic inflation event that followed the Big Bang.

Current thinking on the expansion of the universe does not attribute that expansion to the momentum of its contents. Instead the universe is thought of like raisin toast dough expanding in an oven – not because the raisins are pushing the dough outwards, but because the dough itself expands and, as a consequence, the distance between the raisins (i.e. galaxies etc) increases.

It’s not a perfect analogy since space-time is not a substance – and, at the level of a universe, the heat of the oven equates to the input of energy out of nowhere – and being thermal energy, it’s not dark.

Alternatively, you can model the universe as a perfect fluid where you think of dark energy as a negative pressure (since a positive pressure would compress the fluid). A negative pressure does not obviously require additional contents to be pumped into the fluid universe, although the physical nature of a ‘negative pressure’ in this context is yet to be explained.

Various possible shapes of the observable universe - where mass/energy density is too high, too low or just right (omega = 1), so that the geometry is Euclidean and the three angles of a triangle do add up to 180 degrees. Our universe does appear to have a flat Euclidean geometry, but it doesn't contain enough visible mass/energy to make it flat. Hence, we assume there must be a lot of dark stuff out there.

The requirement for dark energy in standard model cosmology is to sustain the observed flat geometry of space – which is presumed to be determined by the mass-energy content of the universe. Too much mass-energy should give a spherical shape to space, while too little mass-energy should give a hyperboloid shape.

So, since the universe is flat – and stays flat in the face of accelerating expansion – there must be a substantial ‘dark’ (i.e. undetectable) component. And it seems to be a component that grows as the universe increases in volume, in order to sustain that flat geometry – at least in the current era of the universe’s evolution.
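
The benchmark here is the critical density – the mass-energy density a flat (omega = 1) universe must have. A minimal sketch, assuming a round-number Hubble constant of 70 km/s/Mpc (my assumption, not a figure from the survey):

```python
import math

# Critical density for a flat (omega = 1) universe: rho_c = 3 H^2 / (8 pi G)
G = 6.674e-11                       # m^3 kg^-1 s^-2
Mpc = 3.086e22                      # metres per megaparsec
m_H = 1.673e-27                     # kg, hydrogen atom

H0 = 70 * 1000 / Mpc                # assumed Hubble constant, converted to s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.1e} kg/m^3 "
      f"(~{rho_c / m_H:.1f} hydrogen atoms per cubic metre)")
# Observed baryonic matter supplies only a few percent of this,
# which is why a large 'dark' component is invoked to keep omega = 1.
```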

It is called ‘energy’ as it is evenly distributed (i.e. not prone to clumping, like dark matter), but otherwise it has no analogous properties with any form of energy that we know about.

More significantly, from this perspective, the primary requirement for dark energy is not as a driver of expansion, but as a hypothetical entity required to sustain the flatness of space in the face of expansion. This line of thinking then raises the question of just what does drive the accelerating expansion of the universe. And an appropriate answer to that question is – we haven’t a clue.

To support the view that dark energy underlies the universe’s accelerating expansion, you need both a plausible mechanism for the input of energy out of nowhere and a plausible form of energy that is invisible yet somehow generates the production of more space-time volume.

Not saying it’s impossible, but no way has anyone confirmed that dark energy is real. Our flat universe is expanding with a uniform acceleration. For now, that is the news story.

Further reading:
Expansion of the universe
Shape of the universe

Twisted Ring Of Gas Orbits Galactic Center

A Herschel PACS (Photodetector Array Camera and Spectrometer) image of the center of the Milky Way. The dark line of cool gas is thought to be an elliptical ring surrounding the galactic center. The galaxy’s central supermassive black hole Sagittarius A* (Sgr A*) is labelled. The differential velocity of clouds in the ring may result from interaction with Sgr A*. Credit: ESA/Herschel/NASA/Molinari et al.

The Herschel Space Observatory scanned the center of the galaxy in far-infrared and found a cool (in all senses of the word) twisting ring of rapidly orbiting gas clouds. The ring is estimated to have dimensions of 100 parsecs by 60 parsecs (or 326 by 196 light years) – with a composite mass of 30 million solar masses.

The ring is proposed to oscillate twice about the galactic mid-plane for each orbit it makes of the galactic center – giving it the apparent shape of an infinity symbol when viewed from the side.
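
A toy parameterisation makes the geometry easier to picture – an ellipse of roughly 100 x 60 parsecs in the plane, with a vertical oscillation completing two cycles per orbit so the edge-on projection traces a figure of eight. The vertical amplitude below is an assumption for illustration, not the fitted value from the paper.

```python
import math

a, b = 50.0, 30.0            # semi-axes in parsecs (100 x 60 pc ring)
A_z = 15.0                   # assumed vertical amplitude in parsecs

def ring_point(phi):
    """Position (x, y, z) in parsecs at orbital azimuth phi (radians)."""
    x = a * math.cos(phi)
    y = b * math.sin(phi)
    z = A_z * math.sin(2 * phi)  # two vertical oscillations per orbit
    return x, y, z

# Sample the ring once around and print the edge-on (x, z) track
for deg in range(0, 360, 45):
    x, _, z = ring_point(math.radians(deg))
    print(f"phi={deg:3d} deg  x={x:6.1f} pc  z={z:6.1f} pc")
```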

The research team speculate that the ring may be conforming to the shape of a standing wave – perhaps caused by the spin of the central galactic bulge and the lateral movement of gas across the galaxy’s large central bar. The researchers suggest that the combination of these forces may produce some kind of gravitational ‘sloshing’ effect, which would account for the unusual movement of the ring.

The estimated shape of the 100 by 60 parsec ring. Note the oscillating shape from a lateral perspective – and from above, note the ring encircles the supermassive black hole Sagittarius A*, but the black hole is not at its center. Credit: Molinari et al.

Although the ring is estimated to have an average orbital velocity of 10 to 20 kilometers a second, an area of dense cloud coming in close to the galaxy’s central supermassive black hole, Sagittarius A*, was clocked at 50 kilometers a second – perhaps due to its close proximity to Sagittarius A*.

However, the researchers also estimate that Sagittarius A* is well off-centre of the gas ring. Thus, the movement of the ring is dominated by the dynamics of the galactic bulge – rather than Sagittarius A*, which would only exert a significant gravitational influence within a few parsecs of itself.
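
The "few parsecs" figure follows from a standard sphere-of-influence estimate, r ~ G M_BH / sigma^2. A minimal sketch with assumed values for the black hole mass (~4 million solar masses) and the bulge velocity dispersion (~100 km/s) – neither number comes from the Molinari et al. paper itself:

```python
# Rough sphere-of-influence estimate for Sagittarius A*
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
pc = 3.086e16            # metres per parsec

M_bh = 4e6 * M_sun       # assumed Sgr A* mass
sigma = 100e3            # assumed bulge stellar velocity dispersion, m/s

r_infl = G * M_bh / sigma**2
print(f"Sgr A* sphere of influence ~ {r_infl / pc:.1f} parsecs")
```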

Further reading: Molinari et al A 100 parsec elliptical and twisted ring of cold and dense molecular clouds revealed by Herschel around the galactic center.

Astronomy Without A Telescope – Holographic Dark Information Energy

The bubble nebula NGC 7635 - it doesn't have a lot to do with Holographic Dark Information Energy, but you always have to start these articles with an image. Credit: Croman/APOD Nov 7 2005.

Holographic Dark Information Energy gets my vote for the best mix of arcane theoretical concepts expressed in the fewest words – and just to keep it interesting, it’s mostly about entropy.

The second law of thermodynamics requires that the entropy of a closed system cannot decrease. So drop a chunk of ice in a hot bath and the second law requires that the ice melts and the bath water cools – moving the system from a state of thermal disequilibrium (low entropy) towards a state of thermal equilibrium (high entropy). In an isolated system (or an isolated bath) this process can only move in one direction and is irreversible.

A similar idea exists within information theory. Landauer’s principle has it that any logically irreversible manipulation of information, such as erasing one bit of information, equates to an increase in entropy.

So for example, if you keep photocopying the photocopy you just made of an image, the information in that image degrades and is eventually lost. But Landauer’s principle has it that the information is not so much lost, as converted into energy that is dissipated away by the irreversible act of copying a copy.
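
For scale, Landauer's bound puts a number on this: erasing one bit at temperature T dissipates at least k_B T ln 2 of heat. A minimal sketch – the temperatures chosen here are just illustrative and are not part of Gough's cosmological calculation:

```python
import math

k_B = 1.381e-23          # Boltzmann constant, J/K

def landauer_energy(T):
    """Minimum heat (joules) released by erasing one bit at temperature T (K)."""
    return k_B * T * math.log(2)

print(f"at 300 K: {landauer_energy(300):.2e} J per bit")      # ~2.9e-21 J
print(f"at 2.7 K (CMB): {landauer_energy(2.7):.2e} J per bit")
```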

Translating this thinking into a cosmology, Gough proposes that as the universe expands and density declines, information-rich processes like star formation also decline. Or to put it in more conventional terms – as the universe expands, entropy increases, since the energy density of the universe is being steadily dissipated across a greater volume. Also, there are fewer opportunities for gravity to generate low entropy processes like star formation.

The link between entropy and information - more interesting and information-rich things occur in low entropy states than in high entropy states.

So in an expanding universe there is a loss of information – and by Landauer’s principle this loss of information should release dissipated energy – and Gough claims that this dissipated energy accounts for the dark energy component of the current standard model of the universe.

There are rational objections to this proposal. Landauer’s principle is really an expression of entropy in information systems – which can be mathematically modeled as though they were thermodynamic systems. It’s a bold claim to say this has a physical reality and a loss of information actually does release energy – and since Landauer’s principle expresses this as heat energy, wouldn’t it then be detectable (i.e. not dark)?

There is some experimental evidence of information loss releasing energy, but arguably it is just conversion of one form of energy to another – the information loss aspect of it just representing the transition from low to high entropy, as required by the second law of thermodynamics. Gough’s proposal requires that ‘new’ energy is introduced into the universe out of nowhere – although to be fair, that is pretty much what the current mainstream dark energy hypothesis requires as well.

Nonetheless, Gough alleges that the math of information energy does a much better job of accounting for dark energy than the traditional quantum vacuum energy hypothesis which predicts that there should be 120 orders of magnitude more dark energy in the universe than there apparently is.

Gough calculates that the information energy in the current era of the universe should be about 3 times its current mass-energy contents – which closely aligns with the current standard model of 74% dark energy + 26% everything else.
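
That ratio is easy to sanity-check: if the information energy is roughly three times the universe's mass-energy content, its share of the total is 3/(3 + 1), or 75 percent – close to the ~74 percent dark energy fraction quoted for the standard model.

```python
# If information energy ~ 3x the mass-energy content, its share of the total is:
ratio = 3.0
dark_fraction = ratio / (ratio + 1)
print(f"implied dark energy fraction: {dark_fraction:.0%}")   # 75%
```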

Invoking the holographic principle doesn’t add a lot to the physics of Gough’s argument – presumably it’s in there to make the math easier to manage by removing one dimension. The holographic principle has it that all the information about physical phenomena taking place within a 3D region of space can be contained on a 2D surface bounding that region of space. This, like information theory and entropy, is something that string theorists spend a lot of time grappling with – not that there’s anything wrong with that.

Further reading:
Gough Holographic Dark Information Energy.

Astronomy Without A Telescope – Small Bangs

Gamma ray bursts - have we really figured out all the science here? Credit: NASA.

Gamma-ray bursts mostly come in two flavors. Firstly, there are long duration bursts which form in dense star-forming regions and are associated with supernovae – which would understandably generate a sustained outburst of energy. The technical definition of a long duration gamma-ray burst is one that is more than two seconds in duration – but bursts lasting over a minute are not unusual.

Short duration gamma-ray bursts more often occur in regions of low star formation and are not associated with supernovae. Their duration is technically less than 2 seconds, but a duration of only a few milliseconds is not unusual. These are assumed to result from collisions between massive compact objects – perhaps neutron stars or black holes – producing a short, sharp outburst of energy.

But there are also rare instances of gamma-ray bursts that don’t really fit either category. GRB 060614 is such a beast – and has been referred to as a hybrid burst. It had a long duration (102 seconds) but was not associated with a supernova. This finding was significant enough to warrant an article in Nature – with the lead author Gehrels stating ‘This is brand new territory; we have no theories to guide us.’

We should be grateful that no-one decided to call it a dark burst. And we are yet to see another confirmed hybrid gamma-ray burst that might verify whether these hybrid bursts are really something extraordinary.

Nonetheless, Retter and Heller have suggested we should consider the possibility that GRB 060614 might have been a white hole. A white hole is a theoretical entity – and arguably just an artifact of the mathematics of general relativity. Assuming a black hole is an object from which nothing can escape – then its symmetrical opposite would be a white hole into which nothing can enter – but which can radiate light and from which matter can and does escape.

Arguably the whole idea just arises because general relativity abhors sharp edges. So the argument goes that the space-time continuum should ideally extend indefinitely – being curved by massive objects, but never brought to an edge. However, black holes represent a pinch in space-time where everything is supposedly dragged into a point-like singularity. So, one solution to this problem is to suggest that a black hole is not an interruption to the continuum, but instead the space-time around a black hole is drawn into a narrow-necked funnel – essentially a wormhole – which then feeds through to a white hole somewhere else.

Left image: The mysterious hybrid gamma ray burst GRB 060614. Right image: The 'what goes in must come out' model of white holes - where a black hole is connected to a white hole - and the white hole is time-reversed so that it expels material in the past. This was initially proposed as a solution to explain quasars in the early universe, but better explanations have come along since (e.g. supermassive black holes with jets).

Being opposites, a black hole in the present would be connected to a white hole in the past – perhaps a white hole that existed in the early universe, emitting light and matter for a period and then exploding – kind of like a film of the formation of a black hole run backwards. It’s been suggested that such white holes might have created the first anisotropies in the early isotropic universe – creating the ‘clumpiness’ that later led to galaxies and galaxy clusters.

Alternatively, the Big Bang might be seen as the ultimate white hole which expelled a huge amount of mass/energy in one go – and any subsequent white holes might then be ‘lagging cores’ or Small Bangs.

There are substantial theoretical problems with white hole physics though – for example, the matter it ejects should immediately collapse back down on itself through self-gravity – meaning it just becomes a black hole anyway, or perhaps it explodes. If the latter possibility is correct, maybe this is one possible explanation of GRB 060614 seen back in 2006. But it’s probably best to wait for another hybrid burst to appear and get some more data before getting too carried away here.

Further reading:
Retter and Heller The Revival of White Holes as Small Bangs.
The mysterious GRB 060614.
You can apparently create a white hole in your kitchen sink.

Astronomy Without A Telescope – SLoWPoKES

Could assessing the orbital motion of red dwarf binaries offer support for fringe science? Probably not. Credit: NASA.

The Sloan Low-mass Wide Pairs of Kinematically Equivalent Stars (SLoWPoKES) catalog was recently announced, containing 1,342 common proper motion pairs (i.e. binaries) – which are all low mass stars in the mid-K and mid-M stellar classes – in other words, orange and red dwarfs.

These low mass pairs are all separated by at least 500 astronomical units – at which point the mutual gravitation between the two objects gets pretty tenuous – or so Newton would have it. Such a context provides a test-bed for something that lies in the realms of ‘fringe science’ – that is, Modified Newtonian Dynamics, or MoND.

The origin of MoND theory is generally attributed to a paper by Milgrom in 1981, which proposed MoND as an alternative way to account for the dynamics of disk galaxies and galactic clusters. Such structures can’t obviously hold together, with the rotational velocities they possess, without the addition of ‘invisible mass’ – or what these days we call dark matter.

MoND seeks to challenge a fundamental assumption built into both Newton’s and Einstein’s theories of gravity – where the gravitational force (or the space-time curvature) exerted by a massive object recedes by the inverse square of the distance from it. Both theories assume this relationship is universal – it doesn’t matter what the mass is or what the distance is, this relationship should always hold.

In a roundabout way, MoND proposes a modification to Newton’s Second Law of Motion – where Force equals mass times acceleration (F=ma) – although in this context, a is actually representing gravitational force (which is expressed as an acceleration).

If a expresses gravitational force, then F expresses the principle of weight. So for example, you can easily exert a sufficient force to lift a brick off the surface of the Earth, but it’s unlikely that you will be able to lift a brick, with the same mass, off the surface of a neutron star.

Anyhow, the idea of MoND is that by allowing F=ma to have a non-linear relationship at low values of a, a very tenuous gravitational force acting across a great distance might still be able to hold something in a loose orbit around a galaxy, despite the principle of a linear F=ma relationship predicting that this shouldn’t happen.
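
Here is a minimal sketch of that idea, using the commonly quoted MoND acceleration scale a0 ~ 1.2 x 10^-10 m/s^2 and a crude hard switch into the deep-MoND regime, where the effective acceleration becomes sqrt(a_N * a0). Real MoND uses a smooth interpolation function, and the binary mass and separation below are assumptions chosen only to match the scales discussed here.

```python
import math

G = 6.674e-11
M_sun = 1.989e30
AU = 1.496e11
a0 = 1.2e-10                      # MoND acceleration scale, m/s^2 (commonly quoted value)

def newtonian(M, r):
    """Newtonian gravitational acceleration at distance r from mass M."""
    return G * M / r**2

def mond_like(M, r):
    """Crude deep-MoND estimate: a = sqrt(a_N * a0) once a_N drops below a0."""
    aN = newtonian(M, r)
    return aN if aN > a0 else math.sqrt(aN * a0)

r = 7000 * AU                     # the separation scale discussed by Hernandez et al.
M = 0.5 * M_sun                   # assumed red dwarf mass
print(f"Newtonian: {newtonian(M, r):.2e} m/s^2, MoND-like: {mond_like(M, r):.2e} m/s^2")
```

At that separation the Newtonian acceleration falls below a0, and the MoND-like value comes out noticeably larger – which is the sense in which wide binaries could, in principle, discriminate between the two.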

Left image: The unusual flat curve (B) of velocities of objects in disk galaxies versus what would be expected by a naive application of Kepler's Third Law (A). Right image: A scatter plot of selected binaries from the SLoWPoKES catalogue (blue) plotted against the trend expected by Kepler's Third Law (red). Credit: Hernandez et al. (Author's note - Kepler's Third Law of Planetary Motion fits the context of the solar system where 99% of the mass is contained in the Sun. Its applicability to the motion of stars in a galactic disk, with a much more even mass distribution, is uncertain)

MoND is fringe science – an extraordinary claim requiring extraordinary evidence – since if Newton’s or Einstein’s theories of gravity cannot be assumed to be universal, a whole bunch of other physical, astrophysical and cosmological principles start to unravel.

Also, MoND doesn’t really account for other observational evidence of dark matter – notably the gravitational lensing seen in different galaxies and galactic clusters – a degree of lensing that exceeds what is expected from the amount of visible mass that they contain.

In any case, Hernandez et al have presented a data analysis drawn from the SLoWPoKES database of widely spread low-mass binaries, suggestive that MoND might actually work at scales of around 7000 astronomical units. Now, since this hasn’t yet been picked up by Nature, Sci. Am. or anyone else of note – and since some hack writer at Universe Today is just giving it a ‘balanced’ review here, it may be premature to consider that a major paradigm of physics has been overturned.

Nonetheless, the concept of ‘missing mass’ and dark matter has been kicked around for close on 90 years now – with no-one seemingly any closer to determining what the heck this stuff is. On this basis, it is reasonable to at least entertain some alternate views.

Further reading:
Dhital et al Sloan Low-mass Wide Pairs of Kinematically Equivalent Stars (SLoWPoKES): A Catalog of Very Wide, Low-mass Pairs (note that this paper makes no reference to the issue of MoND).

Hernandez et al The Breakdown of Classical Gravity?

Photopic Sky Survey

The Photopic Sky Survey, the largest true-colour image of the night sky ever created (well, it is when you follow the link to the original 360° rotatable image anyway). Credit: Risinger/Photopic Sky Survey.

The Photopic Sky Survey, the largest true-color all-sky survey – along with a constellation and star name overlay option – is available here.

For more detail on how it was created read on…

Nick Risinger decided to take a little break from work and embark on a 45,000 miles by air and 15,000 by land journey – along with his Dad, brother and a carload of astrophotography gear – to capture the biggest true color picture of the universe ever. As you do…

The requirement for the long journey is all about trying to snap the whole universe from the surface of a rotating planetary body in a solar orbit – and with a tilted axis yet. So what might be seen in the northern hemisphere isn’t always visible from the south. Likewise with the seasons, what may be overhead in the summer is below the horizon in the winter.

On top of that, there are issues of light pollution and weather to contend with – so you can’t just stop anywhere and snap away at the sky. Nonetheless, with a navigational computer to ensure accuracy and over the course of one year – Risinger broke the sky down into 624 areas (each 12 degrees wide) and captured each portion through 60 exposures. Four short, medium, and long shots with each of six cameras were taken to help reduce noise, satellite trails, and other inaccuracies.

Nick Risinger preparing an array of cameras in Colorado to shoot part of the five gigapixel Photopic Sky Survey image. Credit: Risinger/Photopic Sky Survey.

Further reading: Photopic Sky Survey home page (includes a description of the hardware and software used).

Astronomy Without A Telescope – Planet Spotting

Kepler's search area to find

The Extrasolar Planets Encyclopedia counted 548 confirmed extrasolar planets as of 6 May 2011, while the NASA Star and Exoplanet Database (updated weekly) was today reporting 535. These are confirmed findings and the counts will significantly increase as more candidate exoplanets are assessed. For example, there were the 1,235 candidates announced by the Kepler mission in February, including 54 that may be in a habitable zone.

So what techniques are brought to bear to come up with these findings?

Pulsar timing – A pulsar is a neutron star with a polar jet roughly aligned with Earth. As the star spins and a jet comes into the line of sight of Earth, we detect an extremely regular pulse of light. Indeed, it is so regular that a slight wobble in the star’s motion, due to it possessing planets, is detectable.

The first extrasolar planets (i.e. exoplanets) were found in this way, actually three of them, around the pulsar PSR B1257+12 in 1992. Of course, this technique is only useful for finding planets around pulsars, none of which could be considered habitable – at least by current definitions – and, in all, only 4 such pulsar planets have been confirmed to date.
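
The detectable signal is the periodic shift in pulse arrival times as the pulsar wobbles about the system's centre of mass. A rough order-of-magnitude sketch, with the planet mass and orbit assumed (loosely in the spirit of the PSR B1257+12 planets, not measured values quoted here):

```python
# Pulse-timing signal from a pulsar planet (edge-on orbit assumed).
# The pulsar's reflex orbit delays/advances pulse arrivals by roughly
# (a_planet * m_planet / M_pulsar) / c.
AU = 1.496e11            # m
c = 3.0e8                # m/s
M_earth = 5.97e24        # kg
M_sun = 1.989e30         # kg

m_planet = 4 * M_earth   # assumed ~4 Earth-mass planet
M_pulsar = 1.4 * M_sun   # typical neutron star mass
a_planet = 0.36 * AU     # assumed orbital radius

a_pulsar = a_planet * m_planet / M_pulsar   # pulsar's reflex orbit radius
delay = a_pulsar / c                        # light-travel time across that orbit
print(f"timing residual amplitude ~ {delay * 1e3:.1f} milliseconds")
```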

To look for planets around main sequence stars, we have…

The radial velocity method – This is similar in principle to detection via pulsar timing anomalies, where a planet or planets shift their star back and forth as they orbit, causing tiny changes in the star’s velocity relative to the Earth. These changes are generally measured as shifts in a star’s spectral lines, detectable via Doppler spectrometry, although detection through astrometry (direct detection of minute shifts in a star’s position in the sky) is also possible.

To date, the radial velocity method has been the most productive method for exoplanet detection (finding 500 of the 548), although it most frequently picks up massive planets in close stellar orbits (i.e. hot Jupiters) – and as a consequence these planets are over-represented in the current confirmed exoplanet population. Also, in isolation, the method is only effective up to about 160 light years from Earth – and only gives you the minimum mass, not the size, of the exoplanet.
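
The size of the velocity wobble is what sets these limits. A minimal sketch of the radial velocity semi-amplitude for a circular, edge-on orbit – the two example systems are illustrative assumptions, but they show why hot Jupiters dominate the tally while Earth analogues remain out of reach:

```python
import math

# K = (2*pi*G / P)^(1/3) * m_p / (M_star + m_p)^(2/3)  for a circular, edge-on orbit
G = 6.674e-11
M_sun, M_jup, M_earth = 1.989e30, 1.898e27, 5.97e24
day, year = 86400.0, 3.156e7

def rv_semi_amplitude(P, m_p, M_star):
    """Stellar radial velocity semi-amplitude (m/s) for orbital period P (s)."""
    return (2 * math.pi * G / P) ** (1/3) * m_p / (M_star + m_p) ** (2/3)

print(f"hot Jupiter, 3-day orbit:   K ~ {rv_semi_amplitude(3 * day, M_jup, M_sun):.0f} m/s")
print(f"Earth analogue, 1-yr orbit: K ~ {rv_semi_amplitude(year, M_earth, M_sun):.2f} m/s")
```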

To determine a planet’s size, you can use…

The transit method – The transit method is effective at both detecting exoplanets and determining their diameter – although it has a high rate of false positives. A star with a transiting planet, which partially blocks its light, is by definition a variable star. However, there are many different reasons why a star may be variable – many of which do not involve a transiting planet.

For this reason, the radial velocity method is often used to confirm a transit method finding. Thus, although 128 planets are attributed to the transit method – these are also part of the 500 counted for the radial velocity method. The radial velocity method gives you the planet’s mass – and the transit method gives you its size (diameter) – and with both these measures you can get the planet’s density. The planet’s orbital period (by either method) also gives you the distance of the exoplanet from its star, by Kepler’s (that is Johannes’) Third Law. And this is how we can determine whether a planet is in a star’s habitable zone.
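
Here is a short sketch of how the two methods combine, with made-up Jupiter-like and Earth-like numbers: radial velocity supplies the mass, the transit supplies the radius (hence bulk density), and Kepler's Third Law turns the orbital period into a star-planet distance – the quantity that decides habitable-zone membership.

```python
import math

G = 6.674e-11
M_sun = 1.989e30
M_jup, R_jup = 1.898e27, 7.149e7     # kg, m (illustrative Jupiter values)
AU, year = 1.496e11, 3.156e7

# Mass (from RV) + radius (from transit) -> bulk density
m_p, r_p = 1.0 * M_jup, 1.0 * R_jup
density = m_p / (4 / 3 * math.pi * r_p**3)
print(f"bulk density ~ {density / 1000:.2f} g/cm^3")

# Orbital period -> semi-major axis via Kepler's Third Law: a^3 = G M P^2 / (4 pi^2)
P = 1.0 * year
a = (G * M_sun * P**2 / (4 * math.pi**2)) ** (1/3)
print(f"semi-major axis ~ {a / AU:.2f} AU")
```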

It is also possible, from consideration of tiny variations in transit periodicity (i.e. regularity) and the duration of transit, to identify additional smaller planets (in fact 8 have been found via this method, or 12 if you include pulsar timing detections). With increased sensitivity in the future, it may also be possible to identify exomoons in this way.

The transit method can also allow a spectroscopic analysis of a planet’s atmosphere. So, a key goal here is to find an Earth analogue in a habitable zone, then examine its atmosphere and monitor its electromagnetic broadcasts – in other words, scan for life signs.

Direct imaging of exoplanet Beta Pictoris b - assisted by nulling interferometry which removes Beta Pictoris' starlight from the image. The red flares are a circumstellar debris disk heated by the star. Credit: ESO.

To find planets in wider orbits, you could try…

Direct imaging – This is challenging since a planet is a faint light source near a very bright light source (the star). Nonetheless, 24 have been found this way so far. Nulling interferometry, where the starlight from two observations is effectively cancelled out through destructive interference, is an effective way to detect any fainter light sources normally hidden by the star’s light.

Gravitational lensing – A star can create a narrow gravitational lens and hence magnify a distant light source – and if a planet around that star is in just the right position to slightly skew this lensing effect, it can make its presence known. Such an event is relatively rare – and then has to be confirmed through repeated observations. Nonetheless, this method has detected 12 so far, which include smaller planets in wide orbits such as OGLE-2005-BLG-390Lb.
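
The scale of the effect is set by the lens star's angular Einstein radius. A minimal sketch with assumed, bulge-survey-like values for the lens mass and the lens and source distances (none of these numbers refer to a specific detection):

```python
import math

# theta_E = sqrt( (4 G M / c^2) * (D_s - D_l) / (D_l * D_s) )
G, c = 6.674e-11, 3.0e8
M_sun = 1.989e30
kpc = 3.086e19                     # metres per kiloparsec
rad_to_mas = 206265e3              # radians -> milliarcseconds

M_lens = 0.3 * M_sun               # assumed low-mass lens star
D_l, D_s = 4 * kpc, 8 * kpc        # assumed lens and source distances

theta_E = math.sqrt(4 * G * M_lens / c**2 * (D_s - D_l) / (D_l * D_s))
print(f"Einstein radius ~ {theta_E * rad_to_mas:.2f} milliarcseconds")
# A planet near this ring adds a brief extra blip to the magnification curve.
```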

These current techniques are not expected to deliver a complete census of all planets within current observational boundaries, but they do offer us an impression of how many there may be out there. It has been speculatively estimated, from the scant data available so far, that there may be 50 billion planets within our galaxy. However, a number of definitional issues remain to be fully thought through, such as where you draw the line between a planet and a brown dwarf. The Extrasolar Planets Encyclopedia currently sets the limit at 20 Jupiter masses.

Anyhow, 548 confirmed exoplanets for only 19 years of planet spotting is not bad going. And the search continues.

Further reading:
The Extrasolar Planets Encyclopedia
The NASA Star and Exoplanet Database (NStED)
Methods of detecting extrasolar planets
The Kepler mission.

Astronomy Without A Telescope – Cosmic Magnetic Fields

The whirlpool galaxy with its magnetic field mapped by observing how distant radio light from pulsars is altered as it passes through the galaxy. Credit: MPIfR Bonn.

The mention of cosmic-scale magnetic fields is still likely to be met with an uncomfortable silence in some astronomical circles – and after a bit of foot-shuffling and throat-clearing, the discussion will be moved on to safer topics. But look, they’re out there. They probably do play a role in galaxy evolution, if not galaxy formation – and are certainly a feature of the interstellar medium and the intergalactic medium.

It is expected that the next generation of radio telescopes, such as LOFAR (Low Frequency Array) and the SKA (Square Kilometre Array), will make it possible to map these fields in unprecedented detail – so even if it turns out that cosmic magnetic fields only play a trivial role in large-scale cosmology – it’s at least worth having a look.

At the stellar level, magnetic fields play a key role in star formation, by enabling a protostar to unload angular momentum. Essentially, the protostar’s spin is slowed by magnetic drag against the surrounding accretion disk – which allows the protostar to keep drawing in more mass without spinning itself apart.

At the galactic level, accretion disks around stellar-sized black holes create jets that inject hot ionised material into the interstellar medium – while central supermassive black holes may create jets that inject such material into the intergalactic medium.

Within galaxies, ‘seed’ magnetic fields may arise from the turbulent flow of ionised material, perhaps further stirred up by supernova explosions. In disk galaxies, such seed fields may then be further amplified by a dynamo effect arising from being drawn into the rotational flow of the whole galaxy. Such galactic scale magnetic fields are often seen forming spiral patterns across a disk galaxy, as well as showing some vertical structure within a galactic halo.

It is anticipated that next generation radio telescopes like the Square Kilometre Array will significantly enhance cosmic magnetic field research. Credit Swinburne AP.

Similar seed fields may arise in the intergalactic medium – or at least the intracluster medium. It’s not clear whether the great voids between galactic clusters would contain a sufficient density of charged particles to generate significant magnetic fields.

Seed fields in the intracluster medium might be amplified by a degree of turbulent flow driven by supermassive black hole jets but, in the absence of more data, we might assume that such fields may be more diffuse and disorganised than those seen within galaxies.

The strength of intracluster magnetic fields averages around 3 x 10^-6 gauss (G), which isn’t a lot. Earth’s magnetic field averages around 0.5 G and a refrigerator magnet is about 50 G. Nonetheless, these intracluster fields offer the opportunity to trace back past interactions between galaxies or clusters (e.g. collisions or mergers) – and perhaps to determine what role magnetic fields played in the early universe, particularly with respect to the formation of the first stars and galaxies.

Magnetic fields can be indirectly identified through a variety of phenomena:
• Optical light is partly polarised by the presence of dust grains which are drawn into a particular orientation by a magnetic field and then only let through light in a certain plane.
• At a larger scale, Faraday rotation comes into play, where the plane of already polarised light is rotated in the presence of a magnetic field.
• There’s also Zeeman splitting, where spectral lines – which normally identify the presence of elements such as hydrogen – may become split in light that has passed through a magnetic field.

Wide angle or all-sky surveys of synchrotron radiation sources (e.g. pulsars and blazars) allow measurement of a grid of data points, which may undergo Faraday rotation as a result of magnetic fields at the intergalactic or intracluster scale. It is anticipated the high resolution offered by the SKA will enable observations of magnetic fields in the early universe back to a redshift of about z = 5, which gives you a view of the universe as it was about 12 billion years ago.
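
As a sense of scale for what those surveys measure, here is a minimal rotation-measure sketch for a uniform slab of magnetised plasma. The electron density and path length are assumed illustrative intracluster values; only the ~3 x 10^-6 G field strength echoes the figure quoted above.

```python
# Faraday rotation for a uniform slab:
# RM = 0.81 * n_e[cm^-3] * B_parallel[microgauss] * L[parsecs]   (rad/m^2)
# and the polarisation angle rotates by RM * lambda^2.
n_e = 1e-3          # assumed electron density, cm^-3
B_par = 3.0         # line-of-sight field, microgauss (~3e-6 G as quoted above)
L = 1e5             # assumed path length, parsecs (~100 kpc through a cluster)

RM = 0.81 * n_e * B_par * L
wavelength = 0.21   # metres (a 1.4 GHz radio observation)
rotation = RM * wavelength**2
print(f"RM ~ {RM:.0f} rad/m^2, polarisation angle rotated by ~ {rotation:.1f} rad at 21 cm")
```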

Further reading: Beck, R. Cosmic Magnetic Fields: Observations and Prospects.