The Mars Curse


Admittedly, Mars has drawn more space missions than any other planet in the Solar System, but why have nearly two thirds of all Mars missions failed in some way? Is the “Galactic Ghoul” or the “Mars Triangle” real? Or is it simply a case of technological trial and error? In any case, the Mars Curse has been a matter of debate for many years, but recent missions to the Red Planet haven’t just reached their destinations; they are surpassing our wildest expectations. Perhaps our luck is changing…

In 1964, NASA’s Mariner 3 was launched from Cape Canaveral Air Force Station. In space, its solar panels failed to open and the batteries went flat; it now orbits the Sun, dead. In 1965, Russian controllers lost contact with Zond 2 after it lost one of its solar panels. It floated lifelessly past Mars in August of that year, only 1,500 km from the planet. In March and April 1969, the twin probes of the Soviet Mars 1969 program both suffered launch failures: 1969A exploded minutes after launch, and 1969B took a U-turn and crashed to Earth. More recently, NASA’s Mars Climate Orbiter crashed into the Red Planet in 1999 after an embarrassing mix-up over measurement units caused the satellite to enter the atmosphere too low. On Christmas Day 2003, the world waited for a signal from the UK Mars lander Beagle 2 after it separated from ESA’s Mars Express. To this day, there’s been no word.

Looking back over the past 48 years of Mars exploration makes for sad reading: a failed mission here, a “lost” mission there, with some unknowns thrown in for good measure. It would seem that mankind’s efforts to send robots to Mars have been thwarted by bad luck and strange mysteries. Is there some kind of Red Planet Triangle (much like the Bermuda Triangle), perhaps with its corners pointing to Mars, Phobos and Deimos? Is the Galactic Ghoul really out there, devouring billions of dollars’ worth of hardware?

The strange-looking DR 6 nebula as observed by the Spitzer telescope - well, it could be the face of the Galactic Ghoul… (credit: NASA)

The “Galactic Ghoul” has been mentioned jokingly by NASA scientists to describe the misfortune of space missions, particularly Mars missions. Looking at the statistics of failed missions, you can’t help but think that there are some strange forces at play. During NASA’s Mars Pathfinder mission, there was a technical hitch as the airbags were deflated after the lander touched down in 1997, prompting one of the rover scientists to suggest that the Galactic Ghoul was beginning to rear its ugly head:

“The great galactic ghoul had to get us somewhere, and apparently the ghoul has decided to pick on the rover.” – Donna Shirley, JPL’s Mars program manager and Sojourner’s designer, in an interview in 1997

Well, there are plenty of answers that explain the losses of these early forays to Mars, putting the Galactic Ghoul to one side for now.

Begin with the very first manmade objects to land on the Martian surface: Mars 2 and Mars 3, the Soviet Union’s lander/orbiter missions of 1971. The lander from Mars 2 is famous for being the first ever robotic explorer on the surface of Mars, but it is also infamous for making the first manmade crater there. The Mars 3 lander had more luck: it made a soft landing and transmitted a signal back to Earth… for 20 seconds. After that, the robot fell silent.

The first rover to land on Mars - Made in Russia (credit: Planetary Society)

Both landers carried the first generation of Mars rovers on board; tethered to the landing craft, they would have had a range of 15 meters from the landing site. Alas, neither was used. It is thought that the Mars 3 lander was blown over by one of the worst dust storms ever observed on Mars.

To travel from Earth to Mars over seven long months, separate from its orbiter, enter the Martian atmosphere and make a soft landing was a huge technological success in itself – only to be blown over by a dust storm is the ultimate example of “bad luck” in my book! Fortunately, both the Mars 2 and 3 orbiters completed their missions, relaying huge amounts of data back to Earth.

The ill-fated NASA Mars Observer before launch (credit: NASA)

This isn’t the only example where “bad luck” and “Mars mission” could fall into the same sentence. In 1993, NASA’s Mars Observer was only three days away from orbital insertion around Mars when it stopped transmitting. After a very long 337-day trip from Earth, it is thought that, on pressurizing the fuel tanks in preparation for the approach, the orbiter’s propulsion system began to leak monomethyl hydrazine and helium gas. The leak sent the craft spinning out of control and switched its electronics into “safe” mode. There was to be no further communication from Mars Observer.

Human error has also played a part in many of the problems with getting robots to the Red Planet. Probably the most glaring, and most hyped, error was made during NASA’s Mars Climate Orbiter mission. In 1999, just before orbital insertion, a navigation error sent the satellite into an orbit 100 km lower than its intended 150 km altitude above the planet. The error was caused by one of the most expensive measurement incompatibilities in the history of space exploration: one of NASA’s subcontractors, Lockheed Martin, used Imperial units while NASA’s software expected metric units. The mismatch culminated in a huge miscalculation of the orbital altitude, and the poor orbiter plummeted through the Martian atmosphere and burned up.
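To see how quietly such a mix-up does its damage, here is a minimal Python sketch of the units trap (the thruster impulse and spacecraft mass are illustrative numbers, and this is emphatically not NASA’s actual navigation software):

```python
# Sketch of the Mars Climate Orbiter units trap: a thruster impulse
# reported in pound-force seconds is read as newton-seconds.

LBF_TO_N = 4.448222          # newtons per pound-force

def delta_v(impulse_ns, mass_kg):
    """Velocity change (m/s) from an impulse given in newton-seconds."""
    return impulse_ns / mass_kg

mass_kg = 338.0              # rough spacecraft mass, for illustration
impulse_lbf_s = 50.0         # hypothetical desaturation burn, in lbf*s

assumed_dv = delta_v(impulse_lbf_s, mass_kg)             # wrong: lbf*s read as N*s
actual_dv = delta_v(impulse_lbf_s * LBF_TO_N, mass_kg)   # correct conversion

print(f"assumed delta-v: {assumed_dv:.3f} m/s")
print(f"actual delta-v:  {actual_dv:.3f} m/s")
print(f"every burn underestimated by a factor of {actual_dv / assumed_dv:.2f}")
```

Each small burn is underestimated by a factor of about 4.45; accumulated over months of cruise, the navigation solution drifts until the predicted closest approach is roughly 100 km too low.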

An artist’s impression of the Mars Climate Orbiter (credit: NASA)

Human error is not restricted to NASA missions. The earlier Russian Phobos 1 mission, launched in 1988, was lost through a software error: a programming subroutine that should never have run during spaceflight was accidentally activated. The subroutine was known about before launch, but engineers decided to leave it in place, as removing it would have required upgrading the whole onboard computer. Given the tight schedule, the spacecraft was launched anyway. Although deemed “safe”, the subroutine was triggered in flight and sent the probe into a spin. With no lock on the Sun to power its solar panels, the satellite was lost.

The Russian Phobos 1 mission to probe Mars and its moon Phobos (credit: NASA)

To date, 26 of the 43 missions to Mars (a whopping 60%) have either failed or been only partially successful in the years since the Soviet Union’s first Marsnik 1 attempt in 1960. In total, the USA/NASA has flown 20 missions, of which six were lost (a 70% success rate); the Soviet Union/Russian Federation flew 18, of which only the two orbiters Mars 2 and 3 were a success (an 11% success rate); the two ESA missions, Mars Express and the Rosetta fly-by, were both complete successes; the single Japanese mission, Nozomi, launched in 1998, suffered complications en route and never reached Mars; and the British lander Beagle 2 famously went AWOL in 2003.
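For what it’s worth, the quoted percentages check out. A quick tally, using the mission counts as given above (historians slice “partial success” differently, so treat these as the article’s own bookkeeping):

```python
# Success rates from the article's own mission counts: (flown, lost).
record = {
    "all Mars missions": (43, 26),
    "USA/NASA":          (20, 6),
    "USSR/Russia":       (18, 16),
    "ESA":               (2, 0),
    "Japan":             (1, 1),
    "UK (Beagle 2)":     (1, 1),
}

for agency, (flown, lost) in record.items():
    success = 100 * (flown - lost) / flown
    print(f"{agency:>17}: {flown} flown, {lost} lost -> {success:.0f}% success")
```

That gives a 40% overall success rate (60% failed or partial), 70% for the USA and 11% for the Soviet/Russian programme, matching the figures above.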

Despite the long list of failed missions, the vast majority of losses occurred during the early “pioneering” years of space exploration. Each failure was taken on board and used to improve the next attempt, and we are now entering an era in which mission success is becoming the norm. NASA currently has two operational satellites around Mars, Mars Odyssey and the Mars Reconnaissance Orbiter. The European Mars Express is also in orbit.

The Mars Exploration Rovers Spirit and Opportunity continue to explore the Martian landscape as their mission keeps on getting extended.

Recent mission losses, such as the British Beagle 2, are inevitable when we consider how complex and challenging it is to send robotic explorers into the unknown. There will always be a degree of human error, technology failure and a decent helping of bad fortune, but we seem to be learning from our mistakes and moving forward. There is a clear trend toward mission success over mission failure.

Perhaps, with technological advancement and a little bit of luck, we are overcoming the Mars Curse and keeping the Galactic Ghoul at bay as we gradually gain a strong foothold on a planet we hope to colonize in the not-so-distant future…

See that Record Breaking Gamma Ray Burst Go! (Video)


No sooner had NASA’s Swift satellite caught the record-breaking gamma ray burst (GRB) in the act on Wednesday (March 19th) than the world’s telescopes swung toward the constellation Boötes to watch the afterglow of this massive explosion. One instrument at a Chilean observatory happened to be watching Swift’s field of view at the time of the blast and has put together a short frame-by-frame video of the event. So if you missed this historic burst from 7.5 billion years ago (which you probably did!) you can watch it now…

Las Campanas Observatory, high in the Chilean mountains, was used to observe the afterglow of the massive GRB detected at 2:12 am (EDT) last Wednesday. The Polish instrument stationed there, called “Pi of the Sky”, is a GRB detector: an array of cameras on the lookout for optical flashes (or transients) in the night sky. This ground-based instrument got lucky. Taking continuous shots of its wide field of view, the instrument’s automatic flash-recognition algorithm detected the explosion two seconds before Swift’s Burst Alert Telescope (BAT). The Polish research group has released the chain of events as an animation with frames 10 seconds apart (shown below). The blast decayed from the brightness of a 5th magnitude star to 11th magnitude over four minutes, meaning it could be seen by the naked eye at its brightest.

One of the most significant results to come out of this multi-instrument observation is that, to within 10 seconds, the optical and gamma-ray emission from a GRB are simultaneous.

The “Pi of the Sky” project is unique in that it surveys the sky for GRBs without depending on satellites, although it does use satellite alerts of GRB flashes to confirm its observations. By watching such a wide field of view and taking continuous 10-second exposures, the instrument can catch a GRB in the very earliest stages of the blast.
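For a flavour of how a wide-field camera array can spot a transient automatically, here is a toy frame-differencing detector in Python with NumPy. It sketches the general technique only; it is not the “Pi of the Sky” team’s actual pipeline, and the frame sizes, noise levels and threshold are invented:

```python
# Toy optical-transient detector: flag pixels that brighten by more than
# n_sigma of the frame-to-frame noise between consecutive exposures.
import numpy as np

def find_flashes(prev_frame, frame, n_sigma=5.0):
    """Return (row, col) pixel positions that brightened significantly."""
    diff = frame.astype(float) - prev_frame.astype(float)
    return np.argwhere(diff > n_sigma * diff.std())

rng = np.random.default_rng(0)
prev = rng.poisson(100, size=(512, 512))   # one 10 s exposure (sky noise)
curr = rng.poisson(100, size=(512, 512))   # the next exposure
curr[300, 200] += 400                      # inject an optical transient

for row, col in find_flashes(prev, curr):
    print(f"candidate flash at pixel ({row}, {col})")
```

A real pipeline also has to reject satellites, aeroplanes and cosmic-ray hits before crying wolf, which is where most of the engineering effort goes.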

GRBs are of massive interest to scientists. Generally, bursts lasting longer than two seconds are attributed to massive stars collapsing and forming black holes, so observing the first two minutes of the blast and afterglow provides valuable information about black hole formation.

Source: Pi of the Sky

Biggest Ever Cosmic Explosion Observed 7.5 Billion Light Years Away


A record-breaking gamma ray burst was observed yesterday (March 19th) by NASA’s Swift satellite. After redshift observations were analysed, astronomers realized they were looking at an explosion halfway across the Universe, some 7.5 billion light years away. This means the burst occurred 7.5 billion years ago, when the Universe was only half its present age. It shatters the record for the most distant object visible to the naked eye…

Gamma ray bursts (GRBs) are the most powerful explosions observed in the Universe since the Big Bang. A GRB is generated during the collapse of a massive star into a black hole or neutron star. The physics behind a GRB is highly complex, but in the most widely accepted model, as a massive star collapses to form a black hole, the infalling material is energetically converted into a blast of high-energy radiation, highly collimated along the poles of the collapsing star. Any local matter downstream of the burst will be vaporized. This has led to the suggestion that some terrestrial extinctions over the past hundreds of millions of years could be down to the Earth being irradiated by gamma radiation from such a blast within the Milky Way. For now, though, all observed GRBs lie outside our galaxy, out of harm’s way.

An artist’s impression of a gamma ray burst (credit: Stanford.edu)

This record-breaking GRB was spotted by Swift (launched into Earth orbit in 2004), which surveys the sky for GRBs. Using its Burst Alert Telescope (BAT), news of an event can be relayed to Earth within 20 seconds. Once the burst is located, the spacecraft turns all its instruments toward it to measure the spectrum of light emitted by the afterglow. The observatory is being used to understand how GRBs are initiated and how the hot gas and dust surrounding the event evolve.

“This burst was a whopper; it blows away every gamma ray burst we’ve seen so far.” – Neil Gehrels, Swift principal investigator, NASA Goddard Space Flight Center, Greenbelt, Md.

This particular GRB was observed in the constellation Boötes at 2:12 a.m. (EDT) on March 19th. Telescopes on the ground and in space quickly turned to Boötes to analyse the afterglow. Later in the day, the Very Large Telescope in Chile and the Hobby-Eberly Telescope in Texas measured the burst’s redshift at 0.94. From this measurement, scientists pinpointed the distance to the explosion: 7.5 billion light years, signifying that this huge GRB happened 7.5 billion years ago, when the Universe was little more than half its current age.
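The jump from a redshift of 0.94 to “7.5 billion years ago” is easy to sanity-check. The Python sketch below integrates the light-travel (lookback) time assuming a standard flat Lambda-CDM cosmology with H0 = 70 km/s/Mpc and Omega_m = 0.3 (my assumed parameters; the observing teams’ exact cosmology may differ slightly):

```python
# Lookback time t = integral of dz / [(1+z) H(z)] for flat Lambda-CDM.
import math

H0 = 70.0                          # km/s/Mpc
HUBBLE_TIME_GYR = 977.8 / H0       # 1/H0 expressed in Gyr
OMEGA_M, OMEGA_L = 0.3, 0.7

def E(z):
    """Dimensionless Hubble parameter H(z)/H0."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def lookback_time_gyr(z, steps=10_000):
    dz = z / steps
    zs = ((i + 0.5) * dz for i in range(steps))      # midpoint rule
    return HUBBLE_TIME_GYR * sum(dz / ((1 + zm) * E(zm)) for zm in zs)

print(f"{lookback_time_gyr(0.94):.1f} Gyr")          # -> about 7.5 Gyr
```

The integral comes out at roughly 7.5 billion years, just as quoted.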

Source: NASA

Could Cosmic Rays Influence Global Warming?


The idea goes like this: cosmic rays, originating from outside the Solar System, hit the Earth’s atmosphere. In doing so, these highly energetic particles create microscopic aerosols. The aerosols collect in the atmosphere and act as nuclei for water droplet formation, so large-scale cloud cover can result from this microscopic interaction. Cloud cover reflects light from the Sun, cooling the Earth, and this “global dimming” effect could hold some answers in the global warming debate, since it influences the amount of solar radiation reaching the surface. The flux of cosmic rays, in turn, depends strongly on the Sun’s magnetic field, which varies over the 11-year solar cycle.

If this theory holds, some questions come to mind: Is the Sun’s changing magnetic field responsible for the amount of global cloud cover? To what degree does this influence global temperatures? And where does that leave man-made global warming? Two research groups have published their work and, perhaps unsurprisingly, have reached two different conclusions…


I always brace myself when I mention “global warming”. I have never come across such an emotive and controversial subject. I get comments from people who support the idea that the human race and its insatiable desire for energy is the root cause of the global increase in temperature. I get anger (big, scary anger!) from people who wholeheartedly believe we are being conned and that “global warming” is a money-making swindle. You just have to look at the discussions that have ensued in previous climate-related stories.

But whatever our opinion, huge quantities of research funding are going into understanding all the factors involved in this worrying upward trend in average temperature.

Cue cosmic rays.

Researchers from the National Polytechnic University in Ukraine take the view that mankind has little or no effect on global warming and that it is purely down to the flux of cosmic radiation (creating clouds). Vitaliy Rusov and colleagues analysed the situation and deduced that the carbon dioxide content of the atmosphere has very little effect on global warming. Their observations suggest that global temperature increases are periodic when viewed against the history of global and solar magnetic field fluctuations, and that the main culprit could be cosmic ray interactions with the atmosphere. Looking back over 750,000 years of palaeotemperature data (historic records of climatic temperature stored in ice cores sampled from ice sheets around the North Atlantic), Rusov’s theory and data analysis draw the same conclusion: that global warming is periodic and intrinsically linked with the solar cycle and Earth’s magnetic field.

But how does the Sun affect the cosmic ray flux? As the Sun approaches “solar maximum”, its magnetic field is at its most stressed and active. Flares and coronal mass ejections become commonplace, as do sunspots. Sunspots are a magnetic manifestation, marking areas on the solar surface where the powerful magnetic field upwells and interacts. It is during this period of the 11-year solar cycle that the reach of the solar magnetic field is at its greatest: so powerful that galactic cosmic rays (high-energy particles from supernovae and the like) are swept off their Earth-bound paths by the magnetic field carried outward on the solar wind.

It is on this premise that the Ukrainian research is based. The cosmic ray flux incident on the Earth’s atmosphere is anti-correlated with sunspot number: fewer sunspots mean an increase in cosmic ray flux, and an increase in cosmic ray flux means an increase in global cloud cover, the Earth’s natural heat shield. At solar minimum, when sunspots are rare, we can therefore expect the albedo (reflectivity) of the Earth to increase, offsetting global warming.
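To see why albedo is a plausible climate lever at all, a zero-dimensional energy-balance estimate is instructive. The sketch below applies the textbook equilibrium-temperature formula, T_eq = [S(1 - A)/(4*sigma)]^(1/4), with illustrative albedo values (these numbers are mine, not Rusov’s):

```python
# Sensitivity of Earth's equilibrium temperature to albedo.
SOLAR_CONSTANT = 1361.0   # W/m^2
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(albedo):
    """Effective temperature of a planet with the given Bond albedo."""
    return (SOLAR_CONSTANT * (1 - albedo) / (4 * SIGMA)) ** 0.25

for albedo in (0.29, 0.30, 0.31):
    print(f"albedo {albedo:.2f} -> T_eq = {equilibrium_temp(albedo):.1f} K")
```

Each extra 0.01 of albedo cools the planet by roughly 0.9 K in this crude model, which is why even modest changes in cloud cover, whatever drives them, matter to the climate debate.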

This is a nice bit of research, with a very elegant mechanism that could physically control the amount of solar radiation heating the atmosphere. However, there is a lot of evidence out there that suggests carbon dioxide emissions are to blame for the current upward trend of average temperature.

Prof. Terry Sloan of the University of Lancaster and Prof. Sir Arnold Wolfendale of the University of Durham, UK, step into the debate with the publication “Testing the proposed causal link between cosmic rays and cloud cover”. Using data from the International Satellite Cloud Climatology Project (ISCCP), the UK-based researchers set out to investigate whether the solar cycle has any effect on the amount of global cloud cover. They find that cloud cover varies with latitude, and that cloud cover and cosmic ray flux correlate in some locations but not in others. The big conclusion of this comprehensive study is that if cosmic rays influence cloud cover at all, the mechanism can account for at most 23 percent of the observed change in cloud cover, and there is no evidence that changes in the cosmic ray flux affect global temperatures.

The cosmic-ray cloud-forming mechanism itself is also in doubt: so far there has been little direct observational evidence of the phenomenon. And nothing in the historical data shows global temperature rising as fast as it is today.

So could we be clutching at straws here? Are we hunting for other answers to the global warming problem when the answer is already right in front of us? Even if global warming can be amplified by natural global processes, mankind sure ain’t helping. There is a known link between carbon dioxide emissions and global temperature rise, whether we like it or not.

Perhaps taking action on carbon emissions is a step in the right direction while further research is carried out on the natural processes that can influence climate change. For now, cosmic rays do not seem to have a significant part to play.

Original source: arXiv blog

ATV Jules Verne Reaches “Parking Orbit” 2000km from ISS


Peering across 2000 km of space, the Automated Transfer Vehicle (ATV) “Jules Verne” leads the orbit of the International Space Station (ISS). The ISS will now be a speck on the ATV’s horizon, but only hours earlier the vehicle completed a fly-by 30 km below the station, giving the station and space shuttle Endeavour crews a look at the precious cargo shipment. Jules Verne will now sit and wait in its “parking orbit” until the coast is clear to dock early next month…

In the ultimate fly-by, Jules Verne shot past the ISS 30 km below its orbit. A few thruster blasts later, the robotic vehicle had reached its parking orbit, 2000 km in front of the station. A photo was apparently taken using the ISS’s robotic arm, but the zoom wasn’t powerful enough to capture any detail of the craft as it passed.

The ATV must now wait for Endeavour to finish its mission before it can approach the station. Jules Verne has passed all mission requirements so far, but it still has a few “practice runs” to carry out before it will be cleared for docking. On March 29th and 31st, the vehicle will perform two mock docking procedures in preparation for the real event on April 3rd.

The ATV successfully completed the Collision Avoidance Manoeuvre on March 16th, so a fail-safe docking procedure is known to be working correctly.

The ATV’s second propulsion chain was used to complete today’s manoeuvres into parking orbit, and all propulsion systems appear to be fully operational. Alberto Novelli, ESA’s Mission Director at the ATV Control Centre in Toulouse, France, said:

“In doing the boosts we have tested all the pressure regulators and that worked perfectly fine. So as of today we have the proof that the propulsion system as a whole, including all the redundancies, is working fine.” – Novelli

So the excitement continues to build for Europe’s first fully automated, 20-tonne ISS supply vehicle as it patiently awaits its turn to dock with the station.

Source: ESA

Gravity Waves in the Atmosphere can Energize Tornados (Video)

Gravity waves are global events. Much like ripples on a massive pond, these large-scale waves can propagate for thousands of miles from an atmospheric disturbance. They are maintained by the gravitational force of the Earth pulling down and the buoyancy of the atmosphere pushing up. Until now it has been hard to link atmospheric gravity waves with other atmospheric phenomena, but new research suggests that gravity waves passing over storms can spin up highly dangerous and damaging tornados. Suddenly gravity waves become very important, and may help forecast where and when tornados will strike…

In a nutshell, meteorologist Tim Coleman of the National Space Science and Technology Center in Huntsville, Alabama, sums up what gravity waves are:

“They are similar to waves on the surface of the ocean, but they roll through the air instead of the water. Gravity is what keeps them going. If you push water up and then it plops back down, it creates waves. It’s the same with air.” – Coleman

Any number of things can cause gravity waves (not to be confused with gravitational waves, the ripples in space-time), including intense disturbances from storm systems, a sudden change in the jet stream’s location, or wind shear. The resulting oscillation can then travel for hundreds or even thousands of miles.

Still from a movie of a gravity wave passing over Tama, Iowa in 2006 (credit: Iowa Environmental Mesonet Webcam)

See a gravity wave in action over Iowa…

Far from being a mild curiosity, it seems that gravity waves have a large part to play in other atmospheric dynamics down here on the ground.

Tim Coleman and colleagues have found that the passage of gravity waves over the top of storms can intensify or even create tornados. It all comes down to the angular momentum of the spinning storm. Large storms rotate slowly, but if for some reason they shrink in scale, the spin increases (imagine an ice skater spinning with her arms outstretched: as she brings her arms in, she spins faster). This is the fundamental rule of conservation of angular momentum: the more a storm contracts, the faster it spins. Ultimately, if the conditions are right, intense tornados can be generated, concentrating a huge amount of angular momentum into a tiny volume.
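The ice-skater effect is easy to put in numbers. If angular momentum is conserved as a rotating column of air contracts (friction ignored, and the radii and wind speed below are illustrative rather than taken from Coleman’s study), the tangential wind speed scales inversely with radius:

```python
# Conservation of angular momentum for a contracting rotating column:
# m * v * r = const, so v2 = v1 * (r1 / r2).

def contracted_wind(v1_ms, r1_m, r2_m):
    """Tangential wind speed after the rotation radius shrinks from r1 to r2."""
    return v1_ms * (r1_m / r2_m)

v1 = 10.0        # m/s: broad, lazy rotation in the parent storm
r1 = 5_000.0     # m:   initial radius of the rotating column
r2 = 500.0       # m:   radius after compression concentrates the spin

print(f"{contracted_wind(v1, r1, r2):.0f} m/s")   # -> 100 m/s, tornado-class wind
```

A tenfold contraction turns a barely noticeable 10 m/s circulation into 100 m/s winds: exactly the “huge amount of angular momentum in a tiny volume” described above.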

This is where gravity waves are believed to play a part. As a wave passes over a storm, the pressure of the overlying wave compresses the storm, forcing a vast amount of angular momentum into a smaller volume and potentially seeding baby tornados. Gravity waves also come in sets: one wave follows another, each periodically compressing the storm and intensifying tornado generation.

So keep your eyes peeled for incoming gravity waves during a storm… tornados may spin to life…

Source: NASA

Astrium Unveils New Spaceship Plans (Video Simulation & Pictures)

Europe’s leading spacecraft manufacturer EADS Astrium, builder of the Ariane rocket (which launches many of Europe’s space missions), has announced plans to mass-produce the next generation of space planes. Developing the design of a single-stage “rocket plane”, the company believes there will be a demand for 10 spacecraft per year once the space tourism industry “takes off”. Astrium won’t be running tourist trips itself; it will simply supply the hardware to space tourism companies, predicting the industry will develop along the lines of a classical aeronautical business model. Astrium has even released an excellent, inspiring (and realistic!) promotional video simulation of the spacecraft’s launch and the view of space…

The Astrium jet takes off like a conventional aircraft - artist’s impression (credit: Astrium/Marc Newson Ltd.)

Astrium has big plans. As space tourism companies begin to emerge, like Richard Branson’s Virgin Galactic, the technology capable of taking tourists above 100 km into the threshold of space is developing at an accelerated rate.

At first glance, the new Astrium concept looks just like a conventional jet, but this aircraft is different. For the first part of the journey high into the Earth’s atmosphere, the spacecraft uses conventional jet engines (which need atmospheric oxygen to function). At about 12 km altitude, the jets become useless as the oxygen thins out. At this point, rocket engines fed by onboard tanks of oxygen and methane rumble into operation, blasting the craft vertically at high velocity. The spacecraft will cover 60 km in 80 seconds and will have enough momentum to coast on into space, breaching the 100 km “lower limit” of space.
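Those figures hang together under simple ballistics. Neglecting drag above engine cutoff and treating gravity as constant (my simplifications, not Astrium’s published flight profile), a craft that stops thrusting at about 60 km while climbing at about 1 km/s coasts to just above the 100 km line:

```python
# Unpowered vertical coast after engine cutoff: height gain = v^2 / (2g).
G = 9.81                        # m/s^2
cutoff_altitude_m = 60_000.0    # engine cutoff altitude (approximate)
cutoff_speed_ms = 1_000.0       # vertical speed at cutoff (approximate)

coast_gain_m = cutoff_speed_ms ** 2 / (2 * G)
apogee_km = (cutoff_altitude_m + coast_gain_m) / 1000.0

print(f"coast adds {coast_gain_m / 1000:.0f} km -> apogee ~{apogee_km:.0f} km")
```

The coast adds roughly 51 km, topping out near 111 km: comfortably past the 100 km threshold of space.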

The Astrium rocket blasts the craft from 12 km to 100 km, into space - artist’s impression (credit: Astrium/Marc Newson)

Watch the Astrium simulation of a trip on board the spacecraft.

Astrium forecasts a healthy market for its space planes, and although the business won’t be in the same league as Boeing or Airbus, it will be a big step for space tourism.

One of the big players in the space tourism market will be Virgin Galactic. Virgin’s business plan is to sell tourist flights as well as develop and maintain its own spacecraft (in partnership with Burt Rutan’s Scaled Composites). Astrium’s plan is much simpler: manufacture the space planes and sell them to space tourism companies. Assuming a pattern similar to the classical aerospace business model, many tourist carriers could end up operating the same Astrium-class spacecraft.

“It will develop towards a classical aeronautical business model. Someone will build the planes; somebody will operate them; somebody will sell the tickets; somebody will provide the accommodation – like any tourism.” – Robert Laine, chief technical officer (CTO) of EADS Astrium

The Astrium craft in space - artist’s impression (credit: Astrium/Marc Newson)

Speaking in London at the Institution of Engineering and Technology, where he delivered the 99th Kelvin Lecture, Laine outlined Astrium’s plan for the future. According to Laine, the new space plane is developing quickly, with the aerodynamic structure undergoing final wind tunnel tests. The Romeo rocket engine has performed well in advanced tests, running for 31 seconds; to provide the craft with enough boost to leave the atmosphere, it will need to burn for 80 seconds. The oxygen-methane engine will give the spacecraft a high enough velocity (1 km/s) to exit the atmosphere.

Weightlessness inside the Astrium spaceship - artist’s impression (credit: Astrium/Marc Newson)

About 50% of the take-off mass of the plane will be fuel. The preliminary design has room for five people: four tourists and one pilot.
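That 50% fuel fraction also squares with the roughly 1 km/s the rocket stage must deliver, via the ideal (Tsiolkovsky) rocket equation, if we guess a plausible specific impulse for an oxygen/methane engine (the Isp below is my assumption, not an Astrium figure):

```python
# Ideal rocket equation: delta-v = Isp * g0 * ln(m_start / m_dry).
import math

ISP_S = 320.0        # s: assumed LOX/methane specific impulse
G0 = 9.81            # m/s^2
mass_ratio = 2.0     # 50% propellant means start mass / dry mass = 2

delta_v = ISP_S * G0 * math.log(mass_ratio)
print(f"ideal delta-v: {delta_v:.0f} m/s")   # -> about 2200 m/s
```

An ideal 2.2 km/s leaves a healthy margin over the quoted 1 km/s once gravity and drag losses during the 80-second vertical burn are paid for.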

Ultimately, the Astrium design is hoped to have a lifetime of 10 years and to be easy to maintain. What makes the design even more interesting is its conventional take-off and landing: there is no need for a separate launch vehicle. The craft could operate from conventional airports, but Astrium believes custom-built spaceports will be a better solution, avoiding busy air traffic. Laine believes the Astrium spacecraft can be fully operational within five years of a financing deal being signed.

The spacecraft begins its descent to Earth (credit: Astrium/Marc Newson)

Although weightlessness will last only around three minutes, the two-hour round trip will certainly be exhilarating. The three-g acceleration as the rocket engines kick in will be worth the trip alone!

Keep an eye on Astrium; it may be a close second behind Richard Branson in getting a space tourist craft to market…

Source: BBC

Why are Saturn’s Rings Disappearing?


Astronomers have noticed a change on Saturn. The planet’s rings are getting thinner and thinner and the details in the dark bands are getting harder to observe. What’s more, at this rate, Saturn’s rings will have completely vanished by Sept. 4, 2009!

But don’t pack up your telescopes quite yet; there’s no reason to be alarmed. This phenomenon occurs every 14 to 15 years, and the explanation comes down to an astronomical optical illusion called a “ring plane crossing”…
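The 14-to-15-year rhythm falls straight out of Saturn’s orbit: the planet takes about 29.46 years to circle the Sun, and Earth gets an edge-on view of the rings roughly twice per circuit. A toy projection from the 2009 crossing (the dates are approximate, since Earth’s own motion can smear one passage into as many as three distinct crossings):

```python
# Ring plane crossings recur roughly every half Saturn orbit.
SATURN_ORBIT_YR = 29.46
half_period = SATURN_ORBIT_YR / 2        # ~14.7 years between crossings

crossing = 2009.67                       # the Sept 4, 2009 crossing
for _ in range(3):
    print(f"ring plane crossing near {crossing:.1f}")
    crossing -= half_period
```

Stepping backwards gives crossings near 2009.7, 1994.9 and 1980.2, a good match to the edge-on apparitions actually observed in 1995-96 and 1979-80.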

In 1612, Galileo noticed something was awry with the beautiful gas giant. The distinctive rings of Saturn were shrinking until he was unable to see them any more. The situation was so strange that Galileo even stopped observing the planet (most likely out of frustration!). He had discovered the rings two years earlier and was instantly entranced by them, writing to his Medici patrons on the discovery in 1610: “I found another very strange wonder, which I should like to make known to their Highnesses…” So you can imagine his confusion when the rings slipped out of view.

Hubble Space Telescope observation of the side-on view of Saturn's rings during the last ring plane crossing in 1995 (credit: NASA/HST)

Ring plane crossings occur periodically, when the tilt of Saturn’s rings and the planet’s position in its orbit combine to give astronomers a side-on view of the rings. Far from being a loss, viewing the paper-thin rings edge-on removes the glare from the bright rings, giving astronomers a superb opportunity to see the icy moons orbiting close to Saturn. Saturn’s strangely blue north pole should also be observable. Saturn is better known for its brown-golden clouds of gas, but at high latitudes these clouds thin out to reveal a blue dome. Cancelling the light from Saturn’s rings may provide the perfect conditions to see that blue from Earth, and to spot the points of bright light glinting off the small moons.

So dust off those telescopes; a once-every-14-years astronomical opportunity is approaching…

Source: NASA

A Step Toward Quantum Communications with Space


Sending quantum information in the form of qubits (quantum bits) has been carried out successfully for years. Firing indecipherable packets of quantum data (quantum states) via photons can, however, degrade the message as the photons travel through the dense atmosphere, and the distance over which data can be transmitted is severely limited by other factors, such as the curvature of the Earth. Now, for the first time, Italian scientists have carried out a successful mock single-photon exchange between Earth and a satellite orbiting at an altitude of 1485 km. Although transmission distances are restricted here on Earth, using satellites will greatly extend the range of such a system, possibly beginning an era of long-distance quantum communication with space.

The key advantage of quantum communications is that they are perfectly secure against eavesdropping. In a world of security-conscious information transmission, the ability to send information hidden in the quantum states of photons would be highly desirable. A major drawback of sending encoded photons here on Earth is the degradation of the data as the photons are scattered by atmospheric particles. The current record for an encoded photon travelling along a line of sight without losing its quantum state stands at 144 km, although that distance can be increased by firing encoded photons along optical fibres.

But what if you used satellites as nodes and relayed the encoded photons through space? Shot straight up, the photons need only travel through about 8 km of dense atmosphere. This is exactly what Paolo Villoresi and his team at the Department of Information Engineering, University of Padova, with collaborators at other institutes in Italy and Austria, hoped to achieve. In fact, they have already tested a “single-photon exchange” between a ground station and the Japanese Experimental Geodetic Satellite Ajisai, with good results.

“Weak laser pulses, emitted by the ground-based station, are directed towards a satellite equipped with cube-corner retroreflectors. These reflect a small portion of the pulse, with an average of less-than-one photon per pulse directed to our receiver, as required for the faint-pulse quantum communication.” – from “Experimental verification of the feasibility of a quantum channel between Space and Earth”, Villoresi et al.

The communication between satellite and observatory
They achieved this feat by using existing Earth-based laser-ranging technology (at the Matera Laser Ranging Observatory, Italy) to direct a weak source of photons at Ajisai, a spherical mirrored satellite (pictured top). Once the powerful laser-ranging beam had pinpointed the satellite, it was switched off to let the weaker encoded laser fire its pulses of data; the two lasers could easily be swapped to make sure Ajisai was receiving the photons. Only a tiny fraction of the pulses were received back at the observatory and, statistically speaking, the requirement of less than one photon returned per laser pulse for quantum communications was met.
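The insistence on less than one photon per pulse is all about photon statistics. A heavily attenuated laser pulse contains a Poisson-distributed number of photons, and the security worry for quantum communication is the occasional pulse carrying two or more, one of which an eavesdropper could quietly skim off. A quick sketch with an illustrative mean photon number:

```python
# Poisson photon-number statistics of a faint laser pulse.
import math

def poisson(n, mu):
    """Probability of finding n photons in a pulse with mean mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

mu = 0.1                                  # illustrative mean photons/pulse
p_empty = poisson(0, mu)
p_single = poisson(1, mu)
p_multi = 1 - p_empty - p_single

print(f"empty pulses:        {p_empty:.3f}")   # ~0.905
print(f"single photon:       {p_single:.3f}")  # ~0.090
print(f"two or more photons: {p_multi:.4f}")   # ~0.0047
```

Most pulses carry nothing at all; the price of keeping multi-photon pulses rare is a low bit rate, not a broken protocol.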

This is the first step of many toward quantum communications, and it by no means demonstrates quantum entanglement between two photons (a scenario described in great detail by one of the collaborators in a separate publication); now that would be the ultimate form of quantum data transmission!

Source: arXiv, arXiv blog

When Black Holes Explode: Measuring the Emission from the Fifth Dimension

Exploding primordial black holes could be detected (credit: Wired.com)

Primordial black holes are remnants of the Big Bang, and they are predicted to be knocking around our Universe right now. If they were about 10^12 kg or bigger at the time of creation, they have enough mass to have survived constant evaporation by Hawking radiation over the 14 billion years since the beginning of the cosmos. But what happens when a tiny black hole evaporates down so far that it becomes tightly wrapped around the structure of a fifth dimension (beyond the “normal” three spatial dimensions and one time dimension)? The black hole explosively reveals itself, much like an elastic band snapping, emitting energy. These final moments signal that the primordial black hole has died. What makes this exciting is that researchers believe they can detect these events as spikes of radio emission, and the hunt has already begun…
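The 10^12 kg figure can be roughly checked against the standard Hawking lifetime formula, t ≈ 5120πG²M³/(ħc⁴), a crude estimate that ignores the growing number of particle species a hot, shrinking hole can radiate into (which shortens real lifetimes somewhat):

```python
# Hawking evaporation time as a function of black hole mass.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
HBAR = 1.055e-34    # J s
C = 2.998e8         # m/s
YEAR_S = 3.156e7    # seconds per year

def evaporation_time_yr(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR_S

for mass in (1e11, 5e11, 1e12):
    print(f"M = {mass:.0e} kg -> t ~ {evaporation_time_yr(mass):.1e} yr")
```

This gives lifetimes of about 2.7 billion, 330 billion and 2,700 billion years respectively, putting the 14-billion-year survival threshold at a few times 10^11 kg by this crude estimate, broadly consistent with the ~10^12 kg figure quoted above.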

Publications about primordial black holes have been popular in recent years. These ancient singularities may be very common in the Universe, but as they are predicted to be quite small, their effect on local space isn’t likely to be very observable (unlike the supermassive black holes at the centres of galaxies, or the stellar black holes left over from supernovae). They could, however, be quite mischievous. Predicted primordial black hole antics include kicking asteroids around should one pass through the Solar System, blasting through the Earth at high velocity, or even getting stuck inside a planet and slowly eating up material like a planetary parasite.

But what if these Big Bang relics never come near the Earth and we never see their effects here (a relief: we can do without a primordial black hole playing billiards with near-Earth asteroids, or the threat of a mini black hole punching through the planet!)? How are we ever going to observe these theoretical singularities?

Eight-meter-wavelength Transient Array (credit: Virginia Tech)

Now an observatory fitting that bill has been realized, and it measures a fairly observable cosmic emission: radio waves. The Eight-meter-wavelength Transient Array (ETA), run by the Virginia Tech Departments of Electrical & Computer Engineering and Physics together with the Pisgah Astronomical Research Institute (PARI), has been taking high-cadence radio observations for the past few months. This basic-looking antenna system, sitting in fields in Montgomery County, Virginia, and in North Carolina, receives emissions at frequencies of 29-47 MHz (wavelengths of around six to ten meters, hence the array’s name), giving researchers a unique opportunity to catch primordial black holes as they die.

Interestingly, if the predictions are correct, this could provide evidence for the existence of a fifth dimension, one curled up at scales of billionths of a nanometer. If this exotic emission is received, and corroborated by both antennae, it would support the string theory prediction that there are more dimensions than the four we currently understand.

“The idea we’re exploring is that the universe has an imperceptibly small dimension (about one billionth of a nanometer) in addition to the four that we know currently. This extra dimension would be curled up, in a state similar to that of the entire universe at the time of the Big Bang.” – Michael Kavic, project investigator

The idea is that as a primordial black hole slowly evaporates and loses mass, it eventually becomes so stressed and stretched around the fifth dimension that it dies, blasting out emission at radio frequencies.

“String theory requires extra dimensions to be a consistent theory. String theory suggests a minimum of 10 dimensions, but we’re only considering models with one extra dimension.” – Kavic

When the Large Hadron Collider comes online in May, it is hoped that the high energies generated may produce mini black holes (amongst other cool things), allowing researchers to hunt for string theory’s extra dimensions. But the Eight-meter-wavelength Transient Array, watching for the deaths of “naturally occurring” primordial black holes, is a far less costly endeavour and may achieve the same goal.

Here’s an article on a theory that there could be 10 dimensions.

Source: Nature