Astronomy Without A Telescope – Solar Or RTG?

The 'edge of the envelope' solar powered Juno mission - scheduled for launch in 2011.


It used to be the case that if you wanted to send a spacecraft mission out past the asteroid belt, you’d need a chunk of plutonium-238 to generate electric power – like for Pioneers 10 and 11, Voyagers 1 and 2, Galileo, Cassini, even Ulysses which just did a big loop out and back to get a new angle on the Sun – and now New Horizons on its way to Pluto.

But in 2011, the Juno mission to Jupiter is scheduled for launch – the first outer planet exploration mission to be powered by solar panels. Also scheduled for 2011, in another break with tradition, Curiosity – the Mars Science Laboratory – will be the first Mars rover to be powered by a plutonium-238 radioisotope thermoelectric generator, or RTG.

I mean OK, the Viking landers had RTGs, but they weren’t rovers. And the rovers (including Sojourner) had radioisotope heaters, but they weren’t RTGs.

So, solar or RTG – what’s best? Some commentators have suggested that NASA’s decision to power Juno with solar is a pragmatic one – seeking to conserve a dwindling supply of RTGs – which have a bit of a PR problem due to the plutonium.

However, if it works, why not push the limits of solar? Although some of our longest functioning probes (like the 33 year old Voyagers) are RTG powered, their long-term survival is largely a result of them operating far away from the harsh radiation of the inner solar system – where things are more likely to break down before they run out of power. That said, since Juno will lead a perilous life flying close to Jupiter’s own substantial radiation, longevity may not be a key feature of its mission.

Perhaps RTG power has more utility. It should enable Curiosity to go on roving throughout the Martian winter – and perhaps manage a range of analytical, processing and data transmission tasks at night, unlike the previous rovers.

With respect to power output, Juno’s solar panels would allegedly produce a whopping 18 kilowatts in Earth orbit, but will only manage 400 watts in Jupiter orbit. If correct, this is still on par with the output of a standard RTG unit – although a large spacecraft like Cassini can stack several RTG units together to generate up to 1 kilowatt.
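
As a sanity check on those numbers, here's a minimal inverse-square scaling in Python, assuming the quoted 18 kilowatts at 1 AU and Jupiter's mean distance of about 5.2 AU. The remaining drop to 400 watts would come from radiation damage and low-intensity, low-temperature losses that this toy calculation ignores.

```python
# Back-of-the-envelope: scale Juno's quoted near-Earth output by the
# inverse-square law to see what's available at Jupiter's distance.
earth_output_watts = 18_000     # quoted output at 1 AU
jupiter_distance_au = 5.2       # Jupiter's mean distance from the Sun

ideal_at_jupiter = earth_output_watts / jupiter_distance_au ** 2
print(f"Ideal inverse-square output at Jupiter: {ideal_at_jupiter:.0f} W")
# prints roughly 670 W; panel degradation and low-intensity, low-temperature
# effects account for the rest of the drop to the quoted 400 W
```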

So, some pros and cons there. Nonetheless, there is a point – which we might position beyond Jupiter’s orbit now – where solar power just isn’t going to cut it and RTGs still look like the only option.

Left image: a plutonium-238 ceramic pellet glowing red hot, like most concentrated ceramicised radioisotopes will do. Credit: Los Alamos National Laboratory. Right image: the Apollo 14 ALSEP RTG, almost identical to Apollo 13's RTG which re-entered Earth's atmosphere with the demise of the Aquarius lunar module. Credit: NASA

RTGs take advantage of the heat generated by a chunk of radioactive material (generally plutonium-238 in a ceramic form), surrounding it with thermocouples, which use the thermal gradient between the hot heat source and the cooler outer surface of the RTG unit to generate current.
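
To put some rough numbers on that, here's a sketch using ballpark figures of the kind quoted for the GPHS-RTG units flown on Galileo and Cassini: around 4,400 watts of decay heat converted at roughly 6–7% efficiency. The specific values here are illustrative assumptions rather than mission specifications.

```python
# Rough sketch of an RTG's electrical output: decay heat from plutonium-238
# multiplied by the conversion efficiency of the thermocouples.
# Thermocouple degradation over time is ignored here.
half_life_years = 87.7            # plutonium-238 half-life
thermal_watts_at_launch = 4400    # ballpark decay heat for a GPHS-RTG
thermoelectric_efficiency = 0.065 # roughly 6-7% conversion to electricity

def electrical_output(years_after_launch):
    decay = 0.5 ** (years_after_launch / half_life_years)
    return thermal_watts_at_launch * decay * thermoelectric_efficiency

print(f"At launch:      {electrical_output(0):.0f} W")   # ~290 W
print(f"After 33 years: {electrical_output(33):.0f} W")  # ~220 W
```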

In response to any ‘OMG, it’s radioactive’ concerns, remember that RTGs travelled with the Apollo 12-17 crews to power their lunar surface experiment packages – including the one on Apollo 13 – which was returned unused to Earth with the lunar module Aquarius, the crew’s life boat until just before re-entry. Allegedly, NASA tested the waters where the remains of Aquarius ended up and found no trace of plutonium contamination – much as expected. It’s unlikely that its heat-tested container was damaged on re-entry, and its integrity was guaranteed for ten plutonium-238 half-lives – that is, nearly 900 years.

In any case, the most dangerous thing you can do with plutonium is to concentrate it. In the unlikely event that an RTG disintegrates on Earth re-entry and its plutonium is somehow dispersed across the planet – well, good. The bigger worry would be that it somehow stays together as a pellet and plonks into your beer without you noticing. Cheers.

Astronomy Without A Telescope – A Snowball’s Chance

Planets form by accreting material from a protoplanetary disk. New research suggests it can happen quickly, and that Earth may have formed in only a few million years. Credit: NASA/JPL-Caltech


Wanna build celestial objects? I mean it sounds easy – you just start with a big cloud of dust and give it a nudge so that it starts to spin and accrete and you end up with a star with a few wisps of dust left in orbit that continue to accrete to form planets.

Trouble is, this process doesn’t seem to be physically possible – or at least nothing like it can be replicated in standard theoretical models and laboratory simulations. There’s a problem with the initial small scale accretion steps.

Dust particles seem to stick readily together when they are very small – through van der Waals and electrostatic forces – steadily building up to form millimeter and even centimeter sized aggregates. But once they get to this size those sticky forces become less influential – and the objects are still too small to generate a meaningful amount of gravitational attraction. What interaction they do have is more in the nature of bouncing collisions – which most often result in pieces being chipped off the bouncing objects, so that they start getting smaller again.

This is an astrophysics problem known as the meter barrier.

But increasingly, theorists are coming up with ways to get around the meter barrier. Firstly, it may be a mistake to assume that you start with a uniform dust cloud, in which spontaneous accretion happens everywhere throughout the cloud.

Current thinking is that it may take a nearby supernova or a closely migrating star to trigger the evolution of a dust cloud into a stellar nursery. It’s possible that turbulence in a dust cloud creates whirlpools and eddies that favor the local aggregation of small particles into larger particles. So rather than going from a uniform dust cloud to a uniform collection of very small rocks – there is just a chance formation of accreted objects here and there.

Or we can just assume a certain stochastic inevitability about anything that has the faintest chance of happening – eventually happening. Over several million years, within a huge dust cloud that might be several hundred astronomical units in diameter, a huge variety of interactions becomes possible – and even if, at any particular location, there is a 99.99% likelihood that no object ever aggregates to a size bigger than a meter, it’s still entirely likely that it will happen somewhere in that vast volume.
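
To see how strongly that arithmetic favours ‘somewhere’, here's a toy calculation. The number of independent accretion sites is a made-up illustrative figure, not something taken from the research.

```python
# Toy version of the 'stochastic inevitability' argument. Even if any single
# accretion site has only a 0.01% chance of growing past a meter, a dust
# cloud offers an enormous number of independent sites in which to try.
p_success_per_site = 1e-4      # i.e. a 99.99% chance of failure at each site
number_of_sites = 1_000_000    # purely illustrative number of sites

p_nowhere = (1 - p_success_per_site) ** number_of_sites
print(f"Chance it never happens anywhere: {p_nowhere:.1e}")
print(f"Chance it happens somewhere:      {1 - p_nowhere:.10f}")
# with a million sites, 'somewhere' becomes effectively a certainty
```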

Either way, once you have a few seed objects, it’s hypothesised that the snowball process takes over. Once an aggregated object achieves a certain mass, its inertia will mean it becomes less engaged in turbulent flow. In other words, the object will begin to move through, rather than move with, the turbulent dust. Under these circumstances, it will behave like a snowball rolling down a snow covered hill, collecting a covering of dust as it plows through the dust cloud – increasing its diameter as it goes.

An artist's impression of HD 98800. The snowball process works even faster in protoplanetary disks around binary stars (at least on paper). Well, Tatooine must have formed somehow... Credit: JPL, NASA.

The time span required to build such snowballed planetesimals from a radius (Rsnow) of 100 meters up to 1000 kilometers is long. The modelling used suggests a time span (Tsnow) of between 1 and 10 million years is required.

It’s also possible to model planet formation around binary stars. Using orbital parameters equivalent to those of the binary system Alpha Centauri A and B, the snowball process is calculated to work more efficiently so that Tsnow is probably no more than 1 million years.

Once hundred kilometer-sized planetesimals have formed, they would still engage in collisions. But at this size, the objects generate substantial self-gravity and collisions are more likely to be constructive – eventually resulting in planets with their own orbiting debris, which then forms rings and moons.

There is evidence that some stars can form planets (at least gas giants) within 1 million years – such as GM Aurigae – while our solar system may have taken a more leisurely 100 million years from the Sun’s birth until the current collection of rocky, gassy and icy planets fully accreted out of the dust.

So, there’s more than a snowball’s chance in hell that this theory may contribute to a better understanding of planet formation.

Further reading: Xie et al. From Dust To Planetesimal: The Snowball Phase?

Astronomy Without A Telescope – Dark Denial

The University of Chicago's Sunyaev-Zeldovich Array - searching for the point in time when dark energy became an important force in the evolution of the universe. Credit: Erik Leitch, University of Chicago.


A recent cosmological model seeks to get around the sticky issue of dark energy by jury-rigging the Einstein field equation so that the universe naturally expands in an accelerated fashion. In doing so, the model also eliminates the sticky issue of singularities – although this includes eliminating the singularity from which the Big Bang originated. Instead the model proposes that we just live in an eternal universe that kind of oscillates geometrically.

As other commentators have noted, this model hence fails to account for the cosmic microwave background. But hey, apart from that, the model is presented in a very readable paper that tells a good story. I am taking the writer’s word for it that the math works – and even then, as the good Professor Einstein allegedly stated: As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.

Like a number of alternate cosmological models, this one also requires the speed of light in a vacuum to vary over the evolution of the universe. It is argued that time is a product of universe expansion – and hence time and distance are mutually derivable, the conversion factor between the two being c, the speed of light. So, an accelerating expansion of the universe is just the result of a change in c – such that a unit of time converts to an increasingly greater distance in space.

Yes, but…

The speed of light in a vacuum is the closest thing there is to an absolute in general relativity – and is really just a way of saying that electromagnetic and gravitational forces act instantaneously – at least from the frame of reference of a photon (and perhaps a graviton, if such a hypothetical particle exists).

It’s only from subluminal (non-photon) frames of reference that it becomes possible to sit back and observe, indeed even time with a stopwatch, the passage of a photon from point A to point B. Such subluminal frames of reference have only become possible as a consequence of the expansion of the universe, which has left in its wake an intriguingly strange space-time continuum in which we live out our fleetingly brief existences.

As far as a photon is concerned the passage from point A to point B is instantaneous – and it always has been. It was instantaneous around 13.7 billion years ago when the entire universe was much smaller than a breadbox – and it still is now.

But once you decide that the speed of light is variable, this whole schema unravels. Without an absolute and intrinsic speed for relatively instantaneous information transfer, the actions of fundamental forces must be intimately linked to the particular point of evolution that the universe happens to be at.

For this to work, information about the evolutionary status of the universe must be constantly relayed to all the constituents of the universe – or otherwise those constituents must have their own internal clock that refers to some absolute cosmic time – or those constituents must be influenced by a change in state of an all-pervading luminiferous ether.

In a nutshell, once you start giving up the fundamental constants of general relativity – you really have to give it all up.

The basic Einstein field equation. The left hand side of the equation describes space-time geometry (of the observable universe, for example) and the right hand side describes the associated mass-energy responsible for that curvature. If you want to add lambda (which these days we call dark energy) - you add it to the left hand side components.
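
For reference, the equation being described – with the cosmological constant written on the geometric (left hand) side – is:

\[
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
\]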

The cosmological constant, lambda – which these days we call dark energy – was always Einstein’s fudge factor. He introduced it into his nicely balanced field equation to allow the modeling of a static universe – and when it became apparent the universe wasn’t static, he realized it had been a blunder. So, if you don’t like dark energy and you can do the math, this might be a better place to start.

Further reading: Wun-Yi Shu Cosmological Models with No Big Bang.

Astronomy Without A Telescope – Not So Ordinary

The Small and Large Magellanic Clouds - not the kind of things you usually find near large spiral galaxies. Cerro Tololo observatory. Credit: Fred Walker.


Sorry – a bit of southern sky bias in this one. But it does seem that our favourite down under naked eye objects are even more unusual than we might have thought. The two dwarf galaxies, the Large and Small Magellanic Clouds, orbit the Milky Way and have bright star forming regions. It would seem that most satellite galaxies, in orbit around other big galaxies, don’t. And, taking this finding a step further, our galaxy may be one of a declining minority of galaxies still dining on gas-filled dwarf galaxies to maintain a bright and youthful appearance.

We used to think that the Sun was an ordinary, unremarkable star – but these days we should acknowledge that it sits outside the statistical mid-range, since the most common stars in the visible universe are red dwarfs. Also, most stars are in binary or larger groups – unlike our apparently solitary one.

The Sun is also fortunately positioned in the Milky Way’s habitable zone – not too close-in to be constantly blasted with gamma rays, but close-in enough for there to be plenty of new star formation to seed the interstellar medium with heavy elements. And the Milky Way itself is starting to look a bit out of the ordinary. It’s quite large as spiral galaxies go, bright with active star formation – and it’s got bright satellites.

The Lambda Cold Dark Matter (CDM) model of large scale structure and galaxy formation has it that galaxy formation is a bottom-up process, with the big galaxies we see today having formed from the accretion of smaller structures – including dwarf galaxies – which themselves may have first formed upon some kind of dark matter scaffolding.

Through this building-up process, spinning spiral galaxies with bright star forming regions should become commonplace – only dimming if they run out of new gas and dust to feast on, only losing their structure if they collide with another big galaxy – first becoming a ‘train wreck’ irregular galaxy and then probably evolving into an elliptical galaxy.

The Lambda CDM model suggests that other bright spiral galaxies should also be surrounded by lots of gas-filled satellite galaxies, being slowly drawn in to feed their host. Otherwise how is it that these spiral galaxies get so big and bright? But, at least for the moment, that’s not what we are finding – and the Milky Way doesn’t seem to be a ‘typical’ example of what’s out there.

The relative lack of satellites observed around other galaxies could mean the era of rapidly accreting and growing galaxies is coming to a close – a point emphasised by the knowledge that we observe distant galaxies at various stages of their past lives anyway. So the Milky Way may already be a relic of a bygone era – one of the last of the galaxies still growing from the accretion of smaller dwarf galaxies.

Supernova 1987a, which exploded near the Tarantula Nebula of the Large Magellanic Cloud. Credit: Anglo-Australian Observatory.

On the other hand – maybe we just have some very unusual satellites. To a distant observer, the Large MC would have nearly a tenth of the luminosity of the Milky Way and the Small MC nearly a fortieth – we don’t find anything like this around most other galaxies. The Clouds may even represent a binary pair – something almost unprecedented in current sky survey data.

They are thought to have passed close together around 2.5 billion years ago – and it’s possible that this event may have set off an extended period of new star formation. So maybe other galaxies do have lots of satellites – it’s just that they are dim and difficult to observe as they are not engaged in new star formation.

Either way, using our galaxy as a basis for modelling how other galaxies work might not be a good idea – apparently it’s not so ordinary.

Further reading: James, P. A. and Ivory, C. F. On the scarcity of Magellanic Cloud-like satellites.

Astronomy Without A Telescope – One Crowded Nanosecond

Labelled version of the Planck space observatory's all-sky survey. Credit: ESA.


Remember how you could once pick up a book about the first three minutes after the Big Bang and be amazed by the level of detail that observation and theory could provide regarding those early moments of the universe? These days the focus is more on what happened between 1×10^-36 and 1×10^-32 of the first second, as we try to marry theory with more detailed observations of the cosmic microwave background.

About 380,000 years after the Big Bang, the early universe became cool and diffuse enough for light to move unimpeded, which it proceeded to do – carrying with it information about the ‘surface of last scattering’. Before this time photons were being continually absorbed and re-emitted (i.e. scattered) by the hot dense plasma of the earlier universe – and never really got going anywhere as light rays.

But quite suddenly, the universe got a lot less crowded when it cooled enough for electrons to combine with nuclei to form the first atoms. So this first burst of light, as the universe became suddenly transparent to radiation, contained photons emitted in that fairly singular moment – since the circumstances to enable such a universal burst of energy only happened once.

With the expansion of the universe over a further 13.6 and a bit billion years, lots of these photons probably crashed into something long ago, but enough are still left over to fill the sky with a signature glow – light emitted at visible and infrared wavelengths by a roughly 3,000 kelvin plasma, now stretched right out into microwave. Nonetheless, it still contains that same ‘surface of last scattering’ information.
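
A minimal Python sketch of that stretching, assuming the textbook values of a roughly 3,000 kelvin plasma at last scattering and a redshift of about 1,090:

```python
# The 'surface of last scattering' light reaches us with its wavelength
# stretched by a factor (1 + z) due to cosmological expansion.
z = 1090                   # approximate redshift of last scattering
T_then = 3000              # plasma temperature at recombination, kelvin
wien_constant = 2.898e-3   # Wien displacement constant, metre-kelvin

peak_then = wien_constant / T_then   # ~1 micron: near-infrared/visible light
peak_now = peak_then * (1 + z)       # stretched into the microwave band
T_now = T_then / (1 + z)             # equivalent blackbody temperature today

print(f"Peak wavelength then: {peak_then * 1e6:.1f} micrometers")
print(f"Peak wavelength now:  {peak_now * 1e3:.1f} millimeters")
print(f"CMB temperature now:  {T_now:.2f} K")
```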

Observations tell us that, at a certain level, the cosmic microwave background is remarkably isotropic. This led to the cosmic inflation theory, where we think there was a very early exponential expansion of the microscopic universe at around 1×10^-36 of the first second – which explains why everything appears so evenly spread out.

However, a close look at the cosmic microwave background (CMB) does show a tiny bit of lumpiness – or anisotropy – as demonstrated in data collected by the aptly-named Wilkinson Microwave Anisotropy Probe (WMAP).

Really, the most remarkable thing about the CMB is its large scale isotropy and finding some fine grain anisotropies is perhaps not that surprising. However, it is data and it gives theorists something from which to build mathematical models about the contents of the early universe.

The apparent quadrupole moment anomalies in the cosmic microwave background might result from irregularities in the early universe - including density fluctuations, dynamic movement (vorticity) or even gravitational waves. However, a degree of uncertainty and 'noise' from foreground light sources is apparent in the data, making firm conclusions difficult to draw. Credit: University of Chicago.

Some theorists speak of CMB quadrupole moment anomalies. The quadrupole idea is essentially an expression of energy density distribution within a spherical volume – which might scatter light up-down or back-forward (or variations from those four ‘polar’ directions). A degree of variable deflection from the surface of last scattering then hints at anisotropies in the spherical volume that represents the early universe.
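
For the record, the ‘quadrupole’ terminology comes from the standard way CMB anisotropies are quantified – expanding the temperature map of the sky in spherical harmonics, with the quadrupole being the l = 2 term:

\[
\frac{\Delta T}{T}(\theta,\phi) = \sum_{\ell \geq 1} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\theta,\phi), \qquad \text{quadrupole} \equiv \ell = 2
\]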

For example, say it was filled with mini black holes (MBHs)? Scardigli et al (see below) mathematically investigated three scenarios, where just prior to cosmic inflation at 1×10^-36 seconds: 1) the tiny primeval universe was filled with a collection of MBHs; 2) the same MBHs immediately evaporated, creating multiple point sources of Hawking radiation; or 3) there were no MBHs, in accordance with conventional theory.

When they ran the math, scenario 1 best fits with WMAP observations of anomalous quadrupole anisotropies. So, hey – why not? A tiny proto-universe filled with mini black holes. It’s another option to test when some higher resolution CMB data comes in from Planck or other future missions to come. And in the meantime, it’s material for an astronomy writer desperate for a story.

Further reading: Scardigli, F., Gruber, C. and Chen (2010) Black hole remnants in the early universe.

Astronomy Without A Telescope – Space Towers

The Seattle Space Needle pokes through the cloud tops (well, just fog really… it's only 184 meters high). Credit: Liem Bahneman, pixduas.com


Arthur C Clarke allegedly said that the space elevator would be built fifty years after people stopped laughing. The first space tower though… well, that might need a hundred years. The idea of raising a structure from the ground up to 100 kilometers in height seems more than a bit implausible by today’s engineering standards, given that we are yet to build anything that is more than one kilometer in height. The idea that we could build something up to geosynchronous orbit at 36,000 kilometers in height is just plain LOL… isn’t it?

Space tower proponents point to a key problem with the space elevator design. It may only be after we have spent years inventing a method to manufacture 36,000 kilometers of flawless carbon or boron nanotube fiber – which is light enough not to break under its own weight, but still strong enough to lift an elevator cabin – that we suddenly realize that we still have to get power to the cabin’s lifting engine. And doesn’t that just mean adding 36,000 kilometers of conventional (and heavy) electrical cable to the construction?

Mind you, building a space tower brings its own challenges. It’s estimated that a steel tower, containing an elevator and cabling, of 100 kilometers height needs a cross-sectional base that is 100 times greater than its apex and a mass that is 135 times greater than its payload (which might be a viewing platform for tourists).

A solid construction capable of holding up a launch platform at 36,000 kilometers altitude might need a tower with ten million times the mass of its payload – with a cross-sectional base covering the area of, say, Spain. And the only construction material likely to withstand the stresses involved would be industrial diamond.
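
Estimates of this sort typically come from something like the constant-stress taper relation for a free-standing column – sketched here under the simplistic assumptions of uniform gravity and a single building material:

\[
A(h) = A_{\mathrm{top}} \, \exp\!\left[\frac{\rho \, g \, (H - h)}{\sigma}\right]
\]

where ρ is the material’s density, σ its working strength, H the total height and h the altitude. The exponential is why base area and mass balloon so dramatically as the height goes up.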

More economical approaches, though no less ambitious or LOL-inducing, are centrifugal and kinetic towers. These are structures that can potentially exceed a height of 100 kilometers, support an appreciable mass at their apex and still maintain structural stability – by virtue of a rapidly rotating loop of cable which not only supports its own weight, but generates lift through centrifugal force. The rotation of the cable loop is driven by a ground-based engine, which can also drive a separate elevator cable to lift courageous tourists. Gaining altitudes of 36,000 kilometers is suggested to be achievable by staged constructions and lighter materials. But, it might be sensible to first see if this grand design on paper can translate to a proposed four kilometer test tower – and then take it from there.

There are also inflatable space towers, proposed to be capable of achieving heights of 3 kilometers with hot air, 30 kilometers with helium or even 100 kilometers with hydrogen (oh, the humanity). Allegedly, a 36,000 kilometer tower might be achievable if filled with electron gas. This is a curious substance argued to be capable of exerting different inflationary pressures depending on the charge applied to the thin-film membrane which contains it. This would allow a structure to withstand differential stresses – where, in a highly charged state, the highly excited electron gas mimics a molecular gas under high pressure, but with a reduced charge it exerts less pressure and the structure containing it becomes more flexible – although, in either case, the overall mass of the gas remains unchanged and suitably low. Hmmm…

An inflatable 100 kilometer high, 300 kilometer long space pier, built to launch spacecraft horizontally. Humans might survive the G forces required to achieve orbit - which they certainly wouldn't do if the same trajectory was attempted from sea-level. Credit: Josh Hall, autogeny.org/tower/tower.html

If this all seems a bit implausible, there’s always the proposed 100 kilometer high space pier that would enable horizontal space launch without rocketry – perhaps via a giant rail gun, or some other similarly theoretical device that works just fine on paper.

Further reading: Krinker, M. (2010) Review of new concepts, ideas and innovations in space towers. (Have to say this review reads like a cut and paste job from a number of not-very-well-translated-from-Russian articles – but the diagrams are, if not plausible, at least comprehensible).

Astronomy Without A Telescope – Galactic Gravity Lab

The center of the Milky Way containing Sagittarius A*. The black hole and several massive young stars in the chaotic region create a surrounding haze of superheated gas that shows up in X-ray light. Credit: chandra.harvard.edu and Kyoto University.


Many an alternative theory of gravity has been dreamt up in the bath, while waiting for a bus – or maybe over a light beverage or two. These days it’s possible to debunk (or otherwise) your own pet theory by predicting on paper what should happen to an object that is closely orbiting a black hole – and then test those predictions against observations of S2 and perhaps other stars that are closely orbiting our galaxy’s central supermassive black hole – thought to be situated at the radio source Sagittarius A*.

S2, a bright B spectral class star, has been closely observed since 1995 during which time it has completed over one orbit of the black hole, given its orbital period is less than 16 years. S2’s orbital dynamics can be expected to differ from what would be predicted by Kepler’s 3rd law and Newton’s law of gravity, by an amount that is three orders of magnitude greater than the anomalous amount seen in the orbit of Mercury. In both Mercury’s and S2’s cases, these apparently anomalous effects are predicted by Einstein’s theory of general relativity, as a result of the curvature of spacetime caused by a nearby massive object – the Sun in Mercury’s case and the black hole in S2’s case.

S2 travels at an orbital speed of about 5,000 kilometers per second – which is nearly 2% of the speed of light. At the periapsis (closest-in point) of its orbit, it is thought to come within 5 billion kilometers of the Schwarzschild radius of the supermassive black hole – the boundary beyond which light can no longer escape, and a point we might loosely regard as the surface of the black hole. The supermassive black hole’s Schwarzschild radius is comfortably smaller than the distance from the Sun to the orbit of Mercury – and at periapsis, S2 is roughly the same distance away from the black hole as Pluto is from the Sun.
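
Plugging the quoted four million solar masses into the Schwarzschild radius formula gives a feel for the scales involved – a minimal sketch:

```python
# Schwarzschild radius r_s = 2GM/c^2 for a 4 million solar mass black hole.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m

M_bh = 4e6 * M_sun
r_s = 2 * G * M_bh / c ** 2

print(f"Schwarzschild radius: {r_s / 1e9:.0f} million km ({r_s / AU:.2f} AU)")
# ~12 million km, or about 0.08 AU -- well inside Mercury's orbit
```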

The supermassive black hole is estimated to have a mass of roughly four million solar masses, meaning it may have dined upon several million stars since its formation in the early universe – and meaning that S2 only manages to cling on to existence by virtue of its stupendous orbital speed – which keeps it falling around, rather than falling into, the black hole. For comparison, Pluto stays in orbit around the Sun by maintaining a leisurely orbital speed of nearly 5 kilometers per second.

Some astrometrics of S2's orbit around the supermassive black hole Sagittarius A* at the center of the Milky Way. Credit: Schödel et al (2002), published in Nature.

The detailed data set of S2’s astrometric position (right ascension and declination) changes over time – and, derived from these, its radial velocity at different points along its orbit – provides an opportunity to test theoretical predictions against observations.

For example, with these data, it’s possible to track various non-Keplerian and non-Newtonian features of S2’s orbit including:

– the effects of general relativity (from an external frame of reference, clocks slow and lengths contract in stronger gravity fields). These are features expected from orbiting a classic Schwarzschild black hole (see the worked example after this list);
– the quadrupole mass moment (a way of accounting for the fact that the gravitational field of a celestial body may not be quite spherical due to its rotation). These are additional features expected from orbiting a Kerr black hole – i.e. a black hole with spin; and
– dark matter (conventional physics suggests that the galaxy should fly apart given the speed it’s rotating at – leading to the conclusion that there is more mass present than meets the eye).
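
As a worked example of the first item, here's the standard Schwarzschild periapsis precession formula applied to Mercury and to S2. The S2 orbital elements used (a semi-major axis of roughly 1,000 AU and an eccentricity of about 0.88) are approximate published values, not figures from the article:

```python
import math

# Periapsis advance per orbit for a test body in the Schwarzschild metric:
# delta_phi = 6 * pi * G * M / (c^2 * a * (1 - e^2))
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
AU = 1.496e11       # m

def precession_per_orbit(M, a, e):
    return 6 * math.pi * G * M / (c ** 2 * a * (1 - e ** 2))

rad_to_arcsec = math.degrees(1) * 3600

mercury = precession_per_orbit(M_sun, 0.387 * AU, 0.2056)
s2 = precession_per_orbit(4e6 * M_sun, 1000 * AU, 0.88)

print(f"Mercury: {mercury * rad_to_arcsec:.2f} arcsec per orbit")  # ~0.10
print(f"S2:      {s2 * rad_to_arcsec:.0f} arcsec per orbit")       # ~680
```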

But hey, that’s just one way of interpreting the data. If you want to test out some alternative theories – like, say, Oceanic String Space Theory – well, here’s your chance.

Further reading: Iorio, L. (2010) Long-term classical and general relativistic effects on the radial velocities of the stars orbiting Sgr A*.

Astronomy Without A Telescope – A Universe Free Of Charge?

When you weigh up all the positives and the negatives, does the universe still have a net charge of zero?


If there were equal amounts of matter and anti-matter in the universe, it would be easy to deduce that the universe has a net charge of zero, since a defining ‘opposite’ of matter and anti-matter is charge. So if a particle has charge, its anti-particle will have an equal but opposite charge. For example, protons have a positive charge – while anti-protons have a negative charge.

But it’s not apparent that there is a lot of anti-matter around as neither the cosmic microwave background, nor the more contemporary universe contain evidence of annihilation borders – where contact between regions of large scale matter and large scale anti-matter should produce bright outbursts of gamma rays.

So, since we do apparently live in a matter-dominated universe, whether the universe has a net charge of zero remains an open question.

It’s reasonable to assume that dark matter has either a net zero charge – or just no charge at all – simply because it is dark. Charged particles and larger objects like stars with dynamic mixtures of positive and negative charges, produce electromagnetic fields and electromagnetic radiation.

So, perhaps we can constrain the question of whether the universe has a net charge of zero to just asking whether the total sum of all non-dark matter has. We know that most cold, static matter – that is in an atomic, rather than a plasma, form – should have a net charge of zero, since atoms have equal numbers of positively charged protons and negatively charged electrons.

Stars composed of hot plasma might also be assumed to have a net charge of zero, since they are the product of accreted cold, atomic material which has been compressed and heated to create a plasma of dissociated nuclei (+ve) and electrons (-ve).

The principle of charge conservation (which is attributed to Benjamin Franklin) has it that the amount of charge in a system is always conserved, so that the amount flowing in will equal the amount flowing out.

Apollo 15's Lunar Surface Experiments Package (ALSEP). The Moon represents a good vantage point to measure the balance of incoming cosmic rays versus outgoing solar wind.

An experiment which has been suggested to enable measurement of the net charge of the universe, involves looking at the solar system as a charge-conserving system, where the amount flowing in is carried by charged particles in cosmic rays – while the amount flowing out is carried by charged particles in the Sun’s solar wind.

If we then look at a cool, solid object like the Moon, which has no magnetic field or atmosphere to deflect charged particles, it should be possible to estimate the net contribution of charge delivered by cosmic rays and by solar wind. And when the Moon is shadowed by the tail of the Earth’s magnetosphere, it should be possible to detect the flux attributable to just cosmic rays – which should represent the charge status of the wider universe.

Drawing on data collected from sources including Apollo surface experiments, the Solar and Heliospheric Observatory (SOHO), the WIND spacecraft and the Alpha Magnetic Spectrometer flown on a space shuttle (STS 91), the surprising finding is a net overbalance of positive charges arriving from deep space, implying that there is an overall charge imbalance in the cosmos.

Either that or a negative charge flux occurs at energy levels lower than the threshold of measurement that was achievable in this study. So perhaps this study is a bit inconclusive – and whether the universe has a net charge of zero remains an open question.

Further reading: Simon, M.J. and Ulbricht, J. (2010) Generating an electrical potential on the Moon by cosmic rays and solar wind?

Astronomy Without A Telescope – Alchemy By Supernova

Supernova remnant G1.9+0.3 (combined image from Chandra X-ray data and radio data from NRAO's Very Large Array). Credit: http://chandra.harvard.edu


The production of elements in supernova explosions is something we take for granted these days. But exactly where and when this nucleosynthesis takes place is still unclear – and attempts to computer model core collapse scenarios still push current computing power to its limits.

Stellar fusion in main sequence stars can build some elements up to, and including, iron. Further production of heavier elements can also take place by certain seed elements capturing neutrons to form isotopes. Those captured neutrons may then undergo beta decay, each converting into a proton – which essentially means you have a new element with a higher atomic number (where atomic number is the number of protons in a nucleus).

This ‘slow’ process or s-process of building heavier elements from, say, iron (26 protons) takes place most commonly in red giants (making elements like copper with 29 protons and even thallium with 81 protons).
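
As a concrete illustration of one such s-process step: iron-56 captures neutrons until it reaches unstable iron-59, which then beta-decays into cobalt-59 – one rung up the periodic table:

\[
{}^{56}\mathrm{Fe} \xrightarrow{\;n\;} {}^{57}\mathrm{Fe} \xrightarrow{\;n\;} {}^{58}\mathrm{Fe} \xrightarrow{\;n\;} {}^{59}\mathrm{Fe} \xrightarrow{\;\beta^-\;} {}^{59}\mathrm{Co} + e^- + \bar{\nu}_e
\]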

But there’s also the rapid or r-process, which takes place in a matter of seconds in core collapse supernovae (being supernova types 1b, 1c and 2). Rather than the steady, step-wise building over thousands of years seen in the s-process – seed elements in a supernova explosion have multiple neutrons jammed into them, while at the same time being exposed to disintegrating gamma rays. This combination of forces can build a wide range of light and heavy elements, notably very heavy elements from lead (82 protons) up to plutonium (94 protons), which cannot be produced by the s-process.

How stuff gets made in our universe. The white elements (above plutonium) can be formed in a laboratory, but it is unclear whether they form naturally - and, in any case, they decay quickly after they are formed. Credit: Northern Arizona University

Prior to a supernova explosion, the fusion reactions in a massive star progressively run through first hydrogen, then helium, carbon, neon, oxygen and finally silicon – from which point an iron core develops which can’t undergo further fusion. As soon as that iron core grows to 1.4 solar masses (the Chandrasekhar limit) it collapses inwards at nearly a quarter of the speed of light as the iron nuclei themselves collapse.

The rest of the star collapses inwards to fill the space created but the inner core ‘bounces’ back outwards as the heat produced by the initial collapse makes it ‘boil’. This creates a shockwave – a bit like a thunderclap multiplied by many orders of magnitude, which is the beginning of the supernova explosion. The shock wave blows out the surrounding layers of the star – although as soon as this material expands outwards it also begins cooling. So, it’s unclear if r-process nucleosynthesis happens at this point.

But the collapsed iron core isn’t finished yet. The energy generated as the core compressed inwards disintegrates many iron nuclei into helium nuclei and neutrons. Furthermore, electrons begin to combine with protons to form neutrons so that the star’s core, after that initial bounce, settles into a new ground state of compressed neutrons – essentially a proto-neutron star. It is able to ‘settle’ due to the release of a huge burst of neutrinos which carries heat away from the core.

It’s this neutrino wind burst that drives the rest of the explosion. It catches up with, and slams into, the already blown-out ejecta of the progenitor star’s outer layers, reheating this material and adding momentum to it. Researchers (below) have proposed that it is this neutrino wind impact event (the ‘reverse shock’) that is the location of the r-process.

It’s thought that the r-process is probably over within a couple of seconds, but it could still take an hour or more before the supersonic explosion front bursts through the surface of the star, delivering some fresh contributions to the periodic table.

Further reading: Arcones A. and Janka H. Nucleosynthesis-relevant conditions in neutrino-driven supernova outflows. II. The reverse shock in two-dimensional simulations.

And, for historical context, the seminal paper on the subject (also known as the B2FH paper): E. M. Burbidge, G. R. Burbidge, W. A. Fowler, and F. Hoyle (1957). Synthesis of the Elements in Stars. Rev Mod Phys 29 (4): 547. (Before this nearly everyone thought all the elements formed in the Big Bang – well, everyone except Fred Hoyle anyway).

Astronomy Without A Telescope – Strange Stars

One step closer to a black hole? A hypothetical strange star results from extreme gravitational compression overcoming the strong interaction that holds neutrons and protons together. Credit: Swinburne University - astronomy.swin.edu.au


Atoms are made of protons, neutrons and electrons. If you cram them together and heat them up you get plasma where the electrons are only loosely associated with individual nuclei and you get a dynamic, light-emitting mix of positively charged ions and negatively charged electrons. If you cram that matter together even further, you drive electrons to merge with protons and you are left with a collection of neutrons – like in a neutron star. So, what if you keep cramming that collection of neutrons together into an even higher density? Well, eventually you get a black hole – but before that (at least hypothetically) you get a strange star.

The theory has it that compressing neutrons can eventually overcome the strong interaction, breaking down a neutron into its constituent quarks, giving a roughly equal mix of up, down and strange quarks – allowing these particles to be crammed even closer together in a smaller volume. By convention, this is called strange matter. It has been suggested that very massive neutron stars may have strange matter in their compressed cores.

However, some say that strange matter has a more fundamentally stable configuration than other matter. So, once a star’s core becomes strange, contact between it and baryonic (i.e. protons and neutrons) matter might drive the baryonic matter to adopt the strange (but more stable) matter configuration. This is the sort of thinking behind fears that the Large Hadron Collider might destroy the Earth by producing strangelets, which would then set off a Kurt Vonnegut Ice-9 scenario. However, since the LHC hasn’t done any such thing, it’s reasonable to think that strange stars probably don’t form this way either.

More likely a ‘naked’ strange star, with strange matter extending from its core to its surface, might evolve naturally under its own self gravity. Once a neutron star’s core becomes strange matter, it should contract inwards leaving behind volume for an outer layer to be pulled inwards into a smaller radius and a higher density, at which point that outer layer might also become strange… and so on. Just as it seems implausible to have a star whose core is so dense that it’s essentially a black hole, but still with a star-like crust – so it may be that when a neutron star develops a strange core it inevitably becomes strange throughout.

Anyhow, if they exist at all, strange stars should have some telltale characteristics. We know that neutron stars tend to lie in the range of 1.4 to 2 solar masses – and that any star with a neutron star’s density that’s over 10 solar masses has to become a black hole. That leaves a bit of a gap – although there is evidence of stellar black holes down to only 3 solar masses, so the gap for strange stars to form may only be in that 2 to 3 solar masses range.

By adopting a more compressed 'ground state' of matter, a strange (quark) star should be smaller, but more massive, than a neutron star. RXJ1856 is in the ballpark for size, but may not be massive enough to fit the theory. Credit: chandra.harvard.edu

The likely electrodynamic properties of strange stars are also of interest (see below). It is likely that electrons will be displaced towards the surface – leaving the body of the star with a net positive charge surrounded by an atmosphere of negatively charged electrons. Presuming a degree of differential rotation between the star and its electron atmosphere, such a structure would generate a magnetic field of the magnitude that can be observed in a number of candidate stars.

Another distinct feature should be a size that is smaller than most neutron stars. One strange star candidate is RXJ1856, which appears to be a neutron star, but is only 11 km in diameter. Some astrophysicists may have muttered hmmm… that’s strange on hearing about it – but it remains to be confirmed that it really is.

Further reading: Negreiros et al (2010) Properties of Bare Strange Stars Associated with Surface Electrical Fields.