Exploring the Universe with Nuclear Power

In the past four decades, NASA and other space agencies from around the world have accomplished some amazing feats. Together, they have sent manned missions to the Moon, explored Mars, mapped Venus and Mercury, and conducted surveys and captured breathtaking images of the Outer Solar System. However, looking ahead to the next generation of exploration and the more-distant frontiers that remain to be explored, it is clear that new ideas need to be put forward for how to quickly and efficiently reach those destinations.

Basically, this means finding ways to power rockets that are more fuel- and cost-efficient while still providing the necessary power to get crews, rovers and orbiters to their far-flung destinations. In this respect, NASA has been taking a hard look at nuclear fission as a possible means of propulsion.

In fact, according to a presentation made by Dr. Michael G. Houts of the NASA Marshall Space Flight Center back in October of 2014, nuclear power and propulsion have the potential to be “game changing technologies for space exploration.”

As the Marshall Space Flight Center’s manager of nuclear thermal research, Dr. Houts is well versed in the benefits nuclear power has to offer space exploration. According to the presentation he and fellow staffers made, a fission reactor can be used in a rocket design to create Nuclear Thermal Propulsion (NTP). In an NTP rocket, uranium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.

NASA design for a Nuclear Engine for Rocket Vehicle Application (NERVA). Credit: NASA

A second possible method, known as Nuclear Electric Propulsion (NEP), involves the same basic reactor converting its heat and energy into electrical energy, which then powers an electrical engine. In both cases, the rocket relies on nuclear fission to generate propulsion rather than the chemical propellants that have been the mainstay of NASA and all other space agencies to date.

Compared to this traditional form of propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density of nuclear fuel compared to rocket fuel. At steady state, a fission reactor produces an average of 2.5 neutrons per reaction, but only a single neutron is needed to trigger a subsequent fission, sustain the chain reaction, and provide constant power.
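The steady-state idea can be sketched with a toy neutron "multiplication factor" k – the average number of neutrons from one fission that go on to cause another. This is an illustration, not real reactor physics: a reactor held at k = 1 keeps its neutron population, and thus its power output, constant.

```python
def neutron_population(k: float, n0: float, generations: int) -> float:
    """Neutron count after `generations` fission generations, starting
    from n0 neutrons, with multiplication factor k (neutrons per fission
    that go on to cause another fission)."""
    return n0 * k ** generations

# k = 1: steady state -- constant neutron population, constant power.
# k < 1: the chain reaction dies out; k > 1: power grows exponentially.
```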

In fact, according to the report, an NTP rocket could generate 200 kWt of power using a single kilogram of uranium for a period of 13 years – which works out to a fuel efficiency rating of about 45 grams per 1,000 MW-hr.
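That figure is easy to sanity-check with a back-of-the-envelope calculation (assuming, for the sake of the sketch, that the full kilogram of uranium is consumed over the 13 years):

```python
POWER_MW = 0.2                # 200 kWt expressed in megawatts (thermal)
HOURS = 13 * 365.25 * 24      # 13 years of continuous operation
URANIUM_G = 1000.0            # one kilogram of uranium fuel

energy_mwh = POWER_MW * HOURS                          # ~22,800 MW-hr total
grams_per_1000_mwh = URANIUM_G / (energy_mwh / 1000.0)
print(round(grams_per_1000_mwh, 1))                    # ~43.9, in line with the ~45 g figure
```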

In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This is what is known as specific impulse, which is measured either in kilonewton-seconds per kilogram (kN·s/kg) or in the number of seconds the rocket can fire continuously. A higher specific impulse would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions. And a more powerful nuclear engine would mean reduced trip times, another cost-cutting measure.
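The two units are related through standard gravity (g0 ≈ 9.81 m/s²): specific impulse in seconds times g0 gives the effective exhaust velocity, and kN·s/kg is numerically the same as km/s. A minimal conversion sketch:

```python
G0 = 9.80665  # standard gravity, m/s^2

def isp_to_kns_per_kg(isp_seconds: float) -> float:
    """Convert specific impulse from seconds to kN*s/kg
    (numerically equal to effective exhaust velocity in km/s)."""
    return isp_seconds * G0 / 1000.0

# e.g. a 3,000-second engine corresponds to ~29.4 kN*s/kg
```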

The key elements of a NERVA solid-core nuclear-thermal engine. Credit: NASA

Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design to more advanced and efficient concepts that rely on either a liquid or a gas core.

In the case of a solid-core design – the only type that has ever been built – a reactor made from materials with a very high melting point houses a collection of solid uranium rods that undergo controlled fission. The hydrogen fuel is contained in a separate tank and passes through tubes around the reactor, gaining heat and being converted into plasma before being channeled through the nozzles to achieve thrust.

Using hydrogen propellant, a solid-core design typically delivers specific impulses on the order of 850 to 1,000 seconds – about twice that of liquid hydrogen/oxygen designs such as the Space Shuttle’s main engine.
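The propellant savings implied by a doubled specific impulse can be sketched with the Tsiolkovsky rocket equation. The 4 km/s delta-v below is only an illustrative figure, roughly the order of a trans-Mars injection burn from low Earth orbit:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v: float, isp_s: float) -> float:
    """Fraction of initial vehicle mass that must be propellant to
    achieve the given delta-v (m/s), per the rocket equation."""
    return 1.0 - math.exp(-delta_v / (isp_s * G0))

chemical = propellant_fraction(4000.0, 450.0)  # ~0.60 of initial mass is propellant
nuclear = propellant_fraction(4000.0, 900.0)   # ~0.36 -- a large saving
```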

However, a significant drawback arises from the fact that nuclear reactions in a solid-core model can create much higher temperatures than conventional materials can withstand. Large temperature variations along the length of the rods can also crack the fuel coatings. Taken together, these problems sacrifice much of the engine’s potential performance.

Diagram of an open-cycle, gas-core design for a nuclear-thermal rocket engine. Credit: NASA

Many of these problems are addressed by the liquid-core design, in which nuclear fuel is mixed into the liquid hydrogen, allowing the fission reaction to take place in the liquid mixture itself. This design can operate at temperatures above the melting point of the nuclear fuel because the container wall is actively cooled by the liquid hydrogen. It is also expected to deliver a specific impulse of 1,300 to 1,500 seconds (roughly 13 to 15 kN·s/kg).

However, compared to the solid-core design, engines of this type are much more complicated, and therefore more expensive and difficult to build. Part of the problem has to do with the time it takes to achieve a fission reaction, which is significantly longer than the time it takes to heat the hydrogen fuel. Engines of this kind therefore require methods to trap the fuel inside the engine while simultaneously allowing the heated plasma to exit through the nozzle.

The final classification is the gas-core engine, a modification of the liquid-core design that uses rapid circulation to create a ring-shaped pocket of gaseous uranium fuel in the middle of the reactor that is surrounded by liquid hydrogen. In this case, the hydrogen fuel does not touch the reactor wall, so temperatures can be kept below the melting point of the materials used.

An engine of this kind could allow for specific impulses of 3,000 to 5,000 seconds (30 to 50 kN·s/kg). But in an “open-cycle” design of this kind, the losses of nuclear fuel would be difficult to control. An attempt to remedy this was drafted with the “closed-cycle” design – aka. the “nuclear lightbulb” engine – where the gaseous nuclear fuel is contained in a series of super-high-temperature quartz containers.

Diagram of a closed-cycle (aka. “nuclear lightbulb”) gas-core nuclear-thermal rocket engine. Credit: NASA

Although this design is less efficient than the open-cycle design and has more in common with the solid-core concept, the limiting factor here is the critical temperature of quartz rather than that of the fuel stack. What’s more, the closed-cycle design is still expected to deliver a respectable specific impulse of about 1,500–2,000 seconds (15–20 kN·s/kg).

However, as Houts indicated, one of the greatest assets nuclear fission has going for it is the long history of service it has enjoyed here on Earth. In addition to commercial reactors providing electricity all over the world, naval vessels (such as aircraft carriers and submarines) have made good use of slow-fission reactors for decades.

Also, NASA has been relying on nuclear power sources for unmanned craft and rovers for over four decades, mainly in the form of Radioisotope Thermoelectric Generators (RTGs) and Radioisotope Heater Units (RHUs). In the case of the former, heat generated by the slow decay of plutonium-238 (Pu-238) is converted into electricity. In the case of the latter, the heat itself is used to keep components and spacecraft systems warm and running.

These types of generators have been used to power and maintain everything from the Apollo rockets to the Curiosity rover, as well as countless satellites, orbiters and robots in between. Since its inception, NASA has launched a total of 44 missions that used either RTGs or RHUs, while the former Soviet space program launched a comparatively solid 33.

Using modular components, an NTP spacecraft could be fitted for numerous mission profiles. Credit: NASA

Nuclear engines were also considered for a time as a replacement for the J-2, a liquid-fuel cryogenic rocket engine used on the S-II and S-IVB stages of the Saturn V and Saturn I rockets. But despite there being numerous versions of solid-core reactors produced and tested in the past, none was ever put into service for an actual spaceflight.

Between 1959 and 1972, the United States tested twenty different sizes and designs during Project Rover and NASA’s Nuclear Engine for Rocket Vehicle Application (NERVA) program. The most powerful engine ever tested was the Phoebus 2a, which during a high-power test operated for a total of 32 minutes – 12 minutes of which were at power levels of more than 4.0 million kilowatts.

But looking to the future, Houts and the Marshall Space Flight Center see great potential and many possible applications. Examples cited in the report include long-range satellites that could explore the Outer Solar System and Kuiper Belt; fast, efficient transportation for manned missions throughout the Solar System; and even the provision of power for settlements on the Moon and Mars someday.

One possibility is to equip NASA’s latest flagship – the Space Launch System (SLS) – with chemically-powered lower-stage engines and a nuclear-thermal engine on its upper stage. The nuclear engine would remain “cold” until the rocket had achieved orbit, at which point the upper stage would be deployed and the reactor activated to generate thrust.

NASA proposals for nuclear-powered exploration rovers and craft. Credit: NASA

This concept for a “bimodal” rocket – one which relies on chemical propellants to achieve orbit and a nuclear-thermal engine for propulsion in space – could become the mainstay of NASA and other space agencies in the coming years. According to Houts and others at Marshall, the dramatic increase in efficiency offered by such rockets could also facilitate NASA’s plans to explore Mars by allowing for the reliable delivery of high-mass automated payloads in advance of manned missions.

These same rockets could then be retooled for speed (instead of mass) and used to transport the astronauts themselves to Mars in roughly half the time it would take for a conventional rocket to make the trip. This would not only save on time and cut mission costs, it would also ensure that the astronauts were exposed to less harmful solar radiation during the course of their flight.

To see this vision become reality, Dr. Houts and other researchers from the Marshall Space Flight Center’s Propulsion Research and Development Laboratory are currently conducting NTP-related tests at the Nuclear Thermal Rocket Element Environmental Simulator (or “NTREES”) in Huntsville, Alabama.

Here, they have spent the past few years analyzing the properties of various nuclear fuels in a simulated thermal environment, hoping to learn more about how they might affect engine performance and longevity in a nuclear-thermal rocket engine.

Concept art showing a nuclear thermal propulsion piloted craft achieving Mars orbit. Credit: NASA

These tests are slated to run until June of 2015, and are expected to lay the groundwork for large-scale ground tests and eventual full-scale testing in flight. The ultimate goal of all of this is to ensure that a manned mission to Mars takes place by the 2030s, and to provide NASA flight engineers and mission planners with all the information they need to see it through.

But of course, it is also likely to have its share of applications when it comes to future Lunar missions, sending crews to study Near-Earth Objects (NEOs), and sending craft to the Jovian moons and other locations in the outer Solar System. As the report shows, NTP craft can be easily modified using modular components to perform everything from Lunar cargo landings to crewed missions, to surveying Near-Earth Asteroids (NEAs).

The universe is a big place, and space exploration is still very much in its infancy. But if we intend to keep exploring it and reaping the rewards that such endeavors have to offer, our methods will have to mature. NTP is merely one proposed possibility. But unlike Nuclear Pulse Propulsion, the Daedalus concept, anti-matter engines, or the Alcubierre Warp Drive, a rocket that runs on nuclear fission is feasible, practical, and possible in the near future.

Nuclear thermal research at the Marshall Center is part of NASA’s Advanced Exploration Systems (AES) Division, managed by the Human Exploration and Operations Mission Directorate and including participation by the U.S. Department of Energy.

Further Reading: NASA, NASA NTRS

Some of the Best Pictures of the Planets in our Solar System

Our Solar System is a pretty picturesque place. Between the Sun, the Moon, and the Inner and Outer Solar System, there is no shortage of wondrous things to behold. But arguably, it is the eight planets that make up our Solar System that are the most interesting and photogenic. With their spherical discs, surface patterns and curious geological formations, Earth’s neighbors have been a subject of immense fascination for astronomers and scientists for millennia.

And in the age of modern astronomy, which goes beyond terrestrial telescopes to space telescopes, orbiters and satellites, there is no shortage of pictures of the planets. But here are a few of the better ones, taken with high-resolution cameras on board spacecraft that managed to capture their intricate, picturesque, and rugged beauty.

Mercury, as imaged by the MESSENGER spacecraft, revealing parts never before seen by human eyes. Image Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

Named after the winged messenger of the gods, Mercury is the closest planet to our Sun. It’s also the smallest (now that Pluto is no longer considered a planet). At 4,879 km in diameter, it is actually smaller than the Jovian moon Ganymede and Saturn’s largest moon, Titan.

Because of its slow rotation and tenuous atmosphere, the planet experiences extreme variations in temperature – ranging from -184 °C on the night side to 465 °C on the side facing the Sun. Because of this, its surface is barren and sun-scorched, as seen in the image above provided by the MESSENGER spacecraft.

A radar view of Venus taken by the Magellan spacecraft, with some gaps filled in by the Pioneer Venus orbiter. Credit: NASA/JPL

Venus is the second planet from our Sun, and Earth’s closest neighboring planet. It also has the dubious honor of being the hottest planet in the Solar System. While farther away from the Sun than Mercury, it has a thick atmosphere made up primarily of carbon dioxide, sulfur dioxide and nitrogen gas. This causes the Sun’s heat to become trapped, pushing average temperatures up to as high as 460°C. Due to the presence of sulfuric and carbonic compounds in the atmosphere, the planet’s atmosphere also produces rainstorms of sulfuric acid.

Because of its thick atmosphere, scientists were unable to examine the surface of the planet until the 1970s and the development of radar imaging. Since that time, numerous ground-based and orbital imaging surveys have produced information on the surface, particularly from the Magellan spacecraft (1990–94). The pictures sent back by Magellan revealed a harsh landscape dominated by lava flows and volcanoes, further adding to Venus’ inhospitable reputation.

Earth viewed from the Moon by the Apollo 11 spacecraft. Credit: NASA

Earth is the third planet from the Sun, the densest planet in our Solar System, and the fifth largest. Not only is 70% of the Earth’s surface covered with water, but the planet is also in the perfect spot – in the center of the hypothetical habitable zone – to support life. Its atmosphere is primarily composed of nitrogen and oxygen, and its average surface temperature is 7.2 °C. Hence why we call it home.

Being that it is our home, observing the planet as a whole was impossible prior to the space age. However, images taken by numerous satellites and spacecraft – such as the Apollo 11 mission, shown above – have been some of the most breathtaking and iconic in history.

The first true-colour image of Mars from ESA’s Rosetta, generated using the OSIRIS orange (red), green and blue colour filters. The image was acquired on 24 February 2007 at 19:28 CET from a distance of about 240,000 km. Credit: MPS for OSIRIS Team MPS/UPD/LAM/IAA/RSSD/INTA/UPM/DASP/IDA

Mars is the fourth planet from our Sun and Earth’s second closest neighbor. Roughly half the size of Earth, Mars is much colder than Earth, but experiences quite a bit of variability, with temperatures ranging from 20 °C at the equator during midday, to as low as -153 °C at the poles. This is due in part to Mars’ distance from the Sun, but also to its thin atmosphere which is not able to retain heat.

Mars is famous for its red color and the speculation it has sparked about life on other planets. This red color is caused by iron oxide – rust – which is plentiful on the planet’s surface. Its surface features, which include long “canals”, have fueled speculation that the planet was once home to a civilization.

Observations made during flybys in the 1960s (by the Mariner 3 and 4 spacecraft) dispelled this notion, but scientists still believe that warm, flowing water once existed on the surface, along with organic molecules. Since that time, a small army of spacecraft and rovers have taken to the Martian surface, and have produced some of the most detailed and beautiful photos of the planet to date.

Jupiter’s Great Red Spot and Ganymede’s Shadow. Image Credit: NASA/ESA/A. Simon (Goddard Space Flight Center)

Jupiter, the closest gas giant to our Sun, is also the largest planet in the Solar System. Measuring nearly 70,000 km in radius, it is 317 times more massive than Earth and 2.5 times more massive than all the other planets in our Solar System combined. It also has the most moons of any planet in the Solar System, with 67 confirmed satellites as of 2012.

Despite its size, Jupiter is not very dense. The planet is comprised almost entirely of gas, with what astronomers believe is a core of metallic hydrogen. Yet, the sheer amount of pressure, radiation, gravitational pull and storm activity of this planet make it the undisputed titan of our Solar System.

Jupiter has been imaged by ground-based telescopes, space telescopes, and orbiter spacecraft. The best ground-based picture was taken in 2008 by the ESO’s Very Large Telescope (VLT) using its Multi-Conjugate Adaptive Optics Demonstrator (MAD) instrument. However, the greatest images captured of the Jovian giant were taken during flybys, in this case by the Galileo and Cassini missions.

Saturn and its rings, as seen from above the planet by the Cassini spacecraft. Credit: NASA/JPL/Space Science Institute/Gordan Ugarkovic

Saturn, the second gas giant closest to our Sun, is best known for its ring system – which is composed of rocks, dust, and other materials. All gas giants have their own system of rings, but Saturn’s system is the most visible and photogenic. The planet is also the second largest in our Solar System, and is second only to Jupiter in terms of moons (62 confirmed).

Much like Jupiter, numerous pictures have been taken of the planet by a combination of ground-based telescopes, space telescopes and orbital spacecraft. These include the Pioneer, Voyager, and most recently, Cassini spacecraft.

Uranus, seen by Voyager 2. Image credit: NASA/JPL

Another gas giant, Uranus is the seventh planet from our Sun and the third largest planet in our Solar System. The planet contains roughly 14.5 times the mass of the Earth, but it has a low density. Scientists believe it is composed of a rocky core that is surrounded by an icy mantle made up of water, ammonia and methane ice, which is itself surrounded by an outer gaseous atmosphere of hydrogen and helium.

It is for this reason that Uranus is often referred to as an “ice giant”. The concentrations of methane are also what give Uranus its blue color. Though telescopes have captured images of the planet, only one spacecraft has ever taken pictures of Uranus. This was the Voyager 2 craft, which performed a flyby of the planet in 1986.

Neptune from Voyager 2. Image credit: NASA/JPL

Neptune is the eighth planet of our Solar System, and the farthest from the Sun. Like Uranus, it is both a gas giant and an ice giant, composed of a solid core surrounded by methane and ammonia ices, which are in turn surrounded by large amounts of methane gas. Once again, this methane is what gives the planet its blue color. It is also the smallest gas giant in the outer Solar System, and the fourth largest planet.

All of the gas giants have intense storms, but Neptune has the fastest winds of any planet in our Solar System: they can reach up to 2,100 kilometers per hour. The strongest storms observed were the Great Dark Spot and the Small Dark Spot, both seen in 1989. In both cases, these storms and the planet itself were imaged by the Voyager 2 spacecraft, the only one to capture images of the planet.

Universe Today has many interesting articles on the subject of the planets, such as interesting facts about the planets and interesting facts about the Solar System.

If you are looking for more information, try NASA’s Solar System exploration page and an overview of the Solar System.

Astronomy Cast has episodes on all of the planets including Mercury.

One of the Milky Way’s Arms Might Encircle the Entire Galaxy

Given that our Solar System sits inside the Milky Way Galaxy, getting a clear picture of what it looks like as a whole can be quite tricky. In fact, it was not until 1852 that astronomer Stephen Alexander first postulated that the galaxy was spiral in shape. And since that time, numerous discoveries have come along that have altered how we picture it.

For decades astronomers have thought the Milky Way consists of four arms — made up of stars and clouds of star-forming gas — that extend outwards in a spiral fashion. Then in 2008, data from the Spitzer Space Telescope seemed to indicate that our Milky Way has just two arms, but a larger central bar. But now, according to a team of astronomers from China, one of our galaxy’s arms may stretch farther than previously thought, reaching all the way around the galaxy.

This arm is known as Scutum–Centaurus; it emanates from one end of the Milky Way’s central bar, passes between us and the Galactic Center, and extends to the other side of the galaxy. For many decades, it was believed that this was where the arm terminated.

However, back in 2011, astronomers Thomas Dame and Patrick Thaddeus from the Harvard–Smithsonian Center for Astrophysics spotted what appeared to be an extension of this arm on the other side of the galaxy.

Star-forming region in interstellar space. Image credit: NASA, ESA and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration

But according to astronomer Yan Sun and colleagues from the Purple Mountain Observatory in Nanjing, China, the Scutum–Centaurus Arm may extend even farther than that. Using a novel approach to study gas clouds located between 46,000 and 67,000 light-years beyond the center of our galaxy, they detected 48 new clouds of interstellar gas, as well as 24 previously-observed ones.

For the sake of their study, Sun and his colleagues relied on radio telescope data provided by the Milky Way Imaging Scroll Painting project, which scans interstellar dust clouds for radio waves emitted by carbon monoxide gas. Next to hydrogen, this gas is the most abundant molecule found in interstellar space – and it is easier for radio telescopes to detect.

Combining this information with data obtained by the Canadian Galactic Plane Survey (which looks for hydrogen gas), they concluded that these 72 clouds line up along a spiral-arm segment that is 30,000 light-years in length. What’s more, they claim in their report that: “The new arm appears to be the extension of the distant arm recently discovered by Dame & Thaddeus (2011) as well as the Scutum-Centaurus Arm into the outer second quadrant.”

Illustration of our galaxy, showing our Sun (red dot) and the possible extension of the Scutum-Centaurus Arm. Credit: Modified from “A Possible Extension of the Scutum-Centaurus Arm into the Outer Second Quadrant” by Yan Sun et al., The Astrophysical Journal Letters, Vol. 798, January 2015; Robert Hurt, NASA/JPL-Caltech/SSC (background spiral)

This would mean the arm is not only the single largest in our galaxy, but is also the only one to effectively reach 360° around the Milky Way. Such a find would be unprecedented given the fact that nothing of the sort has been observed with other spiral galaxies in our local universe.

Thomas Dame, one of the astronomers who discovered the possible extension of the Scutum-Centaurus Arm in 2011, was quoted by Scientific American as saying: “It’s rare. I bet that you would have to look through dozens of face-on spiral galaxy images to find one where you could convince yourself you could track one arm 360 degrees around.”

Naturally, the prospect presents some problems. For one, there is an apparent gap between the segment that Dame and Thaddeus discovered in 2011 and the start of the one discovered by the Chinese team – a 40,000 light-year gap, to be exact. This could mean that the clouds Sun and his colleagues discovered are not part of the Scutum-Centaurus Arm after all, but an entirely new spiral-arm segment.

If this is true, then it would mean that our galaxy has several “outer” arm segments. On the other hand, additional research may close that gap (so to speak) and prove that the Milky Way is as beautiful seen from afar as any of the spirals we observe from the comfort of our own Solar System.

Further Reading: arXiv Astrophysics, The Astrophysical Journal Letters

Faster-Than-Light Lasers Could “Illuminate” the Universe

It’s a cornerstone of modern physics that nothing in the Universe is faster than the speed of light (c). However, Einstein’s theory of special relativity does allow for instances where certain influences appear to travel faster than light without violating causality. These are what are known as “photonic booms,” a concept similar to a sonic boom, where spots of light are made to move faster than c.

And according to a new study by Robert Nemiroff, a physics professor at Michigan Technological University (and co-creator of Astronomy Picture of the Day), this phenomenon may help shine a light (no pun intended) on the cosmos, helping us map it with greater efficiency.

Consider the following scenario: if a laser is swept across a distant object – in this case, the Moon – the spot of laser light will move across the object at a speed greater than c. In effect, it is the spot – not any individual photon – that exceeds the speed of light as it traverses both the surface and depth of the object.
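The geometry is simple: the spot's speed across the target is the sweep's angular rate times the distance to the target. A rough sketch (the sweep rate below is an illustrative assumption) shows how modest a sweep it takes at lunar distance:

```python
C = 299_792_458.0        # speed of light, m/s
MOON_DISTANCE = 3.84e8   # mean Earth-Moon distance, m

def spot_speed(angular_rate: float, distance: float) -> float:
    """Speed of a swept beam's spot across a surface at `distance` (m),
    for a sweep of `angular_rate` radians per second."""
    return angular_rate * distance

# Sweeping the laser at ~45 degrees per second (0.785 rad/s)
# already pushes the spot past c on the lunar surface.
```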

The resulting “photonic boom” occurs in the form of a flash, which is seen by the observer when the speed of the spot drops from superluminal to below the speed of light. This is possible because the spots carry no mass or information, and thus do not violate the fundamental laws of Special Relativity.

An image of NGC 2261 (aka. Hubble’s Variable Nebula) by the Hubble space telescope. Image Credit: HST/NASA/JPL.

Another example occurs regularly in nature, where beams of light from a pulsar sweep across clouds of space-borne dust, creating a spherical shell of light and radiation that expands faster than c when it intersects a surface. Much the same is true of fast-moving shadows, whose speed is not restricted to the speed of light when projected onto an oblique surface.

At a meeting of the American Astronomical Society in Seattle, Washington earlier this month, Nemiroff shared how these effects could be used to study the universe.

“Photonic booms happen around us quite frequently,” said Nemiroff in a press release, “but they are always too brief to notice. Out in the cosmos they last long enough to notice — but nobody has thought to look for them!”

Superluminal sweeps, he claims, could be used to reveal information on the 3-dimensional geometry and distance of stellar bodies like nearby planets, passing asteroids, and distant objects illuminated by pulsars. The key is finding ways to generate them or observe them accurately.

For the purposes of his study, Nemiroff considered two example scenarios. The first involved a beam being swept across a scattering spherical object – i.e. spots of light moving across the Moon and pulsar companions. In the second, the beam is swept across a “scattering planar wall or linear filament” – in this case, Hubble’s Variable Nebula.

Photonic booms caused by laser sweeps could offer a new imaging technique for mapping out passing asteroids. Credit: P. Carril / ESA

In the former case, asteroids could be mapped out in detail using a laser beam and a telescope equipped with a high-speed camera. The laser could be swept across the surface thousands of times a second and the flashes recorded. In the latter, shadows are observed passing between the bright star R Monocerotis and reflecting dust, at speeds so great that they create photonic booms that are visible for days or weeks.

This sort of imaging technique is fundamentally different from direct observation (which relies on lens photography), radar, and conventional lidar. It is also distinct from Cherenkov radiation – the electromagnetic radiation emitted when charged particles pass through a medium at a speed greater than the speed of light in that medium. A case in point is the blue glow emitted by an underwater nuclear reactor.

Combined with the other approaches, it could allow scientists to gain a more complete picture of objects in our Solar System, and even distant cosmological bodies.

Nemiroff’s study was accepted for publication by the Publications of the Astronomical Society of Australia, with a preliminary version available online at arXiv Astrophysics.

Further reading:
Michigan Tech press release
Robert Nemiroff/Michigan Tech

New Mission: DSCOVR Satellite will Monitor the Solar Wind

Solar wind – that is, the stream of charged particles (mainly electrons and protons) released from the upper atmosphere of the Sun – is a constant in our Solar System and generally not a concern for us Earthlings. However, on occasion a solar wind shock wave or Coronal Mass Ejection (CME) can occur, disrupting satellites and electronics systems, and even sending harmful radiation to the surface.

Little wonder, then, that NASA and the National Oceanic and Atmospheric Administration (NOAA) have made a point of keeping satellites in orbit that can maintain real-time monitoring capabilities. The newest mission, the Deep Space Climate Observatory (DSCOVR), is expected to launch later this month.

A collaborative effort between NASA, the NOAA, and the US Air Force, the DSCOVR mission was originally proposed in 1998 as a way of providing near-continuous monitoring of Earth. However, the $100 million satellite has since been re-purposed as a solar observatory.

In this capacity, it will provide support to the National Weather Service’s Space Weather Prediction Center, which is charged with providing advanced warning forecasts of approaching geomagnetic storms for people here on Earth.

Illustration showing the DSCOVR satellite in L1 orbit, located 1.5 million km (930,000 mi) away from Earth. Credit: NOAA

These storms, which are caused by large-scale fluctuations in solar wind, have the potential to disrupt radio signals and electronic systems, which means that telecommunications, aviation, GPS, power grids, and every other major piece of infrastructure are vulnerable to them.

In fact, a report by the National Research Council estimated that recovering from the most extreme geomagnetic storms could take up to a decade and cost taxpayers in the vicinity of $1 to $2 trillion. Add to that the potential for radiation poisoning to human beings (at ground level and in orbit), as well as flora and fauna, and the need for alerts becomes clear.

Originally, the satellite was scheduled to be launched into space on Jan. 23rd from the Cape Canaveral Air Force Station, Florida. However, delays in the latest resupply mission to the International Space Station have apparently pushed the date of this launch back as well.

According to a source who spoke to SpaceNews, the delay of the ISS resupply mission caused scheduling pressure, as both launches are being serviced by SpaceX from Cape Canaveral. However, the same source indicated that there are no technical problems with the satellite or the Falcon 9 that will be carrying it into orbit. It is now expected to be launched on Jan. 29th at the latest.

SpaceX will be providing the launch service for DSCOVR, which is now expected to be launched by the end of Jan aboard a Falcon 9 rocket (pictured here). Credit: NOAA

Once deployed, DSCOVR will eventually take over from NASA’s aging Advanced Composition Explorer (ACE) satellite, which has been providing solar wind alerts since 1997 and is expected to remain in operation until 2024. Like ACE, DSCOVR will be stationed at the Lagrange 1 point (L1), the neutral gravity point between the Earth and Sun approximately 1.5 million km (930,000 mi) from Earth.

From this position, DSCOVR will be able to provide advanced warning, roughly 15 to 60 minutes before a solar wind shockwave or CME reaches Earth. This information will be essential to emergency preparedness efforts, and the data provided will also help improve predictions as to where a geomagnetic storm will impact the most.
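That 15-to-60-minute window follows directly from the geometry: the warning time is just the L1–Earth distance divided by the solar wind speed. A quick sketch (the speeds used here are typical textbook values, not mission specifications):

```python
L1_DISTANCE_KM = 1.5e6  # distance from Earth to the Sun-Earth L1 point, km

def warning_minutes(wind_speed_km_s):
    """Minutes between a disturbance crossing L1 and its arrival at Earth."""
    return L1_DISTANCE_KM / wind_speed_km_s / 60.0

# Slow ambient wind vs. a fast CME-driven shock:
for speed in (400, 800, 1500):
    print(f"{speed:>5} km/s -> {warning_minutes(speed):5.1f} minutes of warning")
```

Slow ambient wind gives about an hour of warning; the fastest CME-driven shocks cut that to roughly a quarter of an hour.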

These sorts of warnings are essential to maintaining the safety and integrity of infrastructure, but also the health and well-being of people here on Earth. Given our dependence on high-tech navigation systems, electricity, the internet, and telecommunications, a massive geomagnetic storm is not something we want to get caught off guard by!

And be sure to check out this video of the DSCOVR mission, courtesy of the NOAA:

Further Reading: NOAA

Japan’s Akatsuki Spacecraft to Make Second Attempt to Enter Orbit of Venus in December 2015

Back in 2010, the Japanese Aerospace Exploration Agency (JAXA) launched the Venus Climate Orbiter “Akatsuki” with the intention of learning more about the planet’s weather and surface conditions. Unfortunately, due to engine trouble, the probe failed to make it into the planet’s orbit.

Since that time, it has remained in a heliocentric orbit, some 134 million kilometers from Venus, conducting scientific studies on the solar wind. However, JAXA is going to make one more attempt to slip the probe into Venus’ orbit before its fuel runs out.

Since 2010, JAXA has been working to keep Akatsuki functioning so that they could give the spacecraft another try at entering Venus’ orbit.

After a thorough examination of all the possibilities for the failure, JAXA determined that the probe’s main engine burned out as it attempted to decelerate on approach to the planet. They claim this was likely due to a malfunctioning valve in the spacecraft’s fuel pressure system caused by salt deposits jamming the valve between the helium pressurization tank and the fuel tank. This resulted in high temperatures that damaged the engine’s combustion chamber throat and nozzle.

A radar view of Venus taken by the Magellan spacecraft, with some gaps filled in by the Pioneer Venus orbiter. Credit: NASA/JPL

JAXA adjusted the spacecraft’s orbit so that it would establish a heliocentric orbit, with the hopes that it would be able to swing by Venus again in the future. Initially, the plan was to make another orbit insertion attempt by the end of 2016, when the spacecraft’s orbit would bring it back to Venus. But because the spacecraft’s speed has slowed more than expected, JAXA determined that if they slowly decelerated Akatsuki even more, Venus would “catch up with it” even sooner. A quicker return to Venus would also be advantageous in terms of the lifespan of the spacecraft and its equipment.

But this second chance will likely be the final chance, depending on how much damage there is to the engines and other systems. The reasons for making this final attempt are quite obvious. In addition to providing vital information on Venus’ meteorological phenomena and surface conditions, the successful orbital insertion of Akatsuki would also be the first time that Japan deployed a satellite around a planet other than Earth.

If all goes well, Akatsuki will enter orbit around Venus at a distance of roughly 300,000 to 400,000 km from the surface, using the probe’s 12 smaller engines since the main engine remains non-functional. The original mission called for the probe to establish an elliptical orbit that would place it 300 to 80,000 km away from Venus’ surface.

This wide variation in distance was intended to provide the chance to study the planet’s meteorological phenomena and its surface in detail, while still being able to observe atmospheric particles escaping into space.

Artist’s impression of Venus Express entering orbit in 2006. Image Credit: ESA – AOES Medialab

At a distance of 400,000 km, image quality and imaging opportunities are expected to be diminished. However, JAXA is still confident that it will be able to accomplish most of the mission’s scientific goals.

In its original form, these goals included obtaining meteorological information on Venus using four cameras that capture images in the ultraviolet and infrared wavelengths. These would be responsible for globally mapping clouds and peering beneath the veil of the planet’s thick atmosphere.

Lightning would be detected with a high-speed imager, and radio-science monitors would observe the vertical structure of the atmosphere. In so doing, JAXA hopes to confirm the existence of surface volcanoes and lightning, both of which were first detected by the ESA’s Venus Express spacecraft. One of the original aims of Akatsuki was to complement the Venus Express mission. But Venus Express has now completed its mission, running out of gas and plunging into the planet’s atmosphere.

But most of all, it is hoped that Akatsuki can provide observational data on the greatest mystery of Venus, which has to do with its atmospheric storms.

Artist’s impression of lightning storms on Venus. Credit: ESA

Previous observations of the planet have shown that winds that can reach up to 100 m/s (360 km/h or ~225 mph) circle the planet every four to five Earth days. This means that Venus experiences winds that are up to 60 times faster than the speed at which the planet turns, a phenomenon known as “super-rotation”.

Here on Earth, the fastest winds only reach between 10 and 20 percent of the planet’s rotation speed. As such, our current meteorological understanding does not account for these super-high-speed winds, and it is hoped that more information on the atmosphere will provide some clues as to how this can happen.
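The “60 times faster” figure is easy to verify from Venus’ size and its roughly 243-day rotation period. A back-of-the-envelope check (the planetary values are standard reference figures, not from JAXA):

```python
import math

VENUS_RADIUS_KM = 6051.8      # equatorial radius
ROTATION_PERIOD_DAYS = 243.0  # sidereal rotation period
WIND_SPEED_M_S = 100.0        # observed cloud-top wind speed

circumference_m = 2 * math.pi * VENUS_RADIUS_KM * 1000
surface_speed = circumference_m / (ROTATION_PERIOD_DAYS * 86400)  # m/s at the equator

print(f"equatorial rotation speed: {surface_speed:.2f} m/s")
print(f"super-rotation factor: {WIND_SPEED_M_S / surface_speed:.0f}x")
```

The equator crawls along at under 2 m/s, so 100 m/s winds do indeed lap the planet’s rotation by a factor of several dozen.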

Between the extremely thick clouds, sulfuric acid rain, lightning, and high-speed winds, Venus’ atmosphere is certainly very interesting! Add to that the fact that the volcanic, pockmarked surface cannot be surveyed without the help of sophisticated radar or IR imaging, and you begin to understand why JAXA is eager to get their probe into orbit while they still can.

And be sure to check out this video, courtesy of JAXA, detailing the Venus Climate Orbiter mission:

Further Reading: JAXA

Exoplanet-Hunting TESS Satellite to be Launched by SpaceX

A conceptual image of the Transiting Exoplanet Survey Satellite. Image Credit: MIT

The search for exoplanets is heating up, thanks to the deployment of space telescopes like Kepler and the development of new observation methods. In fact, over 1800 exoplanets have been discovered since the 1980s, with 850 discovered just last year. That’s quite the rate of progress, and Earth’s scientists have no intention of slowing down!

Hot on the heels of the Kepler mission and the ESA’s deployment of the Gaia space observatory last year, NASA is getting ready to launch TESS (the Transiting Exoplanet Survey Satellite). And to provide the launch services, NASA has turned to one of its favorite commercial space service providers – SpaceX.

The launch will take place in August 2017 from the Cape Canaveral Air Force Station in Florida, where it will be placed aboard a Falcon 9 v1.1 – a heavier version of the v1.0, introduced in 2013. Although NASA has contracted SpaceX to perform multiple cargo deliveries to the International Space Station, this will be only the second time that SpaceX has assisted the agency with the launch of a science satellite.

This past September, NASA also signed a lucrative contract with SpaceX worth $2.6 billion to fly astronauts and cargo to the International Space Station. As part of the Commercial Crew Program, SpaceX’s Falcon 9 and Dragon spacecraft were selected by NASA to help restore indigenous launch capability to the US.

Artist’s impression of the James Webb Space Telescope, the space observatory scheduled for launch in 2018. Image Credit: NASA/JPL

The total cost for TESS is estimated at approximately $87 million, which will include launch services, payload integration, and tracking and maintenance of the spacecraft throughout the course of its three-year mission.

As for the mission itself, that has been the focus of attention for many years. Since it was deployed in 2009, the Kepler spacecraft has yielded more and more data on distant planets, many of which are Earth-like and potentially habitable. But in 2013, two of Kepler’s four reaction wheels failed, and the telescope lost its ability to point precisely toward stars. Even though it is now conducting a modified exoplanet-hunting mission, NASA and exoplanet enthusiasts have been excited by the prospect of sending up another exoplanet hunter, one even more ideally suited to the task.

Once deployed, TESS will spend the next three years scanning the nearest and brightest stars in our galaxy, looking for possible signs of transiting exoplanets. This will involve scanning nearby stars for what is known as a “light curve”, a phenomenon where the visual brightness of a star drops slightly due to the passage of a planet between the star and its observer.

By measuring the rate at which the star dims, scientists are able to estimate the size of the planet passing in front of it. Combined with measurements of the star’s radial velocity, they are also able to determine the density and physical structure of the planet. Though the method has some drawbacks – chief among them that planets rarely pass directly in front of their host stars as seen from Earth – it remains the most effective means of observing exoplanets to date.
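The size estimate comes from the depth of the light curve’s dip: the fraction of starlight blocked equals the ratio of the two disk areas, so depth = (Rp/R★)² and therefore Rp = R★·√depth. A minimal sketch, using the Sun’s radius for the host star (the example depths are hypothetical, chosen for illustration):

```python
import math

SUN_RADIUS_KM = 695_700.0

def planet_radius_km(transit_depth, star_radius_km=SUN_RADIUS_KM):
    """Planet radius implied by the fractional dip in the star's brightness."""
    return star_radius_km * math.sqrt(transit_depth)

# A ~1% dip on a Sun-like star implies a roughly Jupiter-sized planet:
print(f"{planet_radius_km(0.01):,.0f} km")
# An Earth-sized planet blocks only ~0.0084% of a Sun-like star's light:
print(f"{planet_radius_km(8.4e-5):,.0f} km")
```

The second figure hints at why finding Earth-sized worlds demands such photometric precision: the dip is less than a ten-thousandth of the star’s brightness.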

Number of extrasolar planet discoveries per year up to Sept. 2014, with colors indicating method of detection. Blue: radial velocity; Green: transit; Yellow: timing; Red: direct imaging; Orange: microlensing. Image Credit: Alderon/Wikimedia Commons

In fact, as of 2014, this method became the most widely used for determining the presence of exoplanets beyond our Solar System. Compared to other methods – such as measuring a star’s radial velocity, direct imaging, the timing method, and microlensing – more planets have been detected using the transit method than all the other methods combined.

In addition to being able to spot planets by the comparatively simple method of measuring their light curve, the transit method also makes it possible to study the atmosphere of a transiting planet. Combined with the technique of measuring the parent star’s radial velocity, scientists are also able to measure a planet’s mass, density, and physical characteristics.

With TESS, it will be possible to study the mass, size, density and orbit of exoplanets. In the course of its three-year mission, TESS will be looking specifically for Earth-like and super-Earth candidates that exist within their parent star’s habitable zone.

This information will then be passed on to Earth-based telescopes and the James Webb Space Telescope – which will be launched in 2018 by NASA with assistance from the European and Canadian Space Agencies – for detailed characterization.

The TESS Mission is led by the Massachusetts Institute of Technology – who developed it with seed funding from Google – and is overseen by the Explorers Program at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

Further Reading: NASA, SpaceX

Rogue Star HIP 85605 on Collision Course with our Solar System, but Earthlings Need Not Worry

It’s known as HIP 85605, one of two stars that make up a binary in the Hercules constellation roughly 16 light years away. And if a recent research paper produced by Dr. Coryn Bailer-Jones of the Max Planck Institute for Astronomy in Heidelberg, Germany is correct, it is on a collision course with our Solar System.

Now for the good news: according to Bailer-Jones’ calculations, the star will pass by our Solar System at a distance of 0.04 parsecs, which is equivalent to roughly 8,000 times the distance between the Earth and the Sun (8,000 AU). In addition, this passage will not affect Earth’s or any other planet’s orbit around the Sun. And perhaps most importantly of all, none of it will be happening for another 240,000 to 470,000 years.

“Even though the galaxy contains very many stars,” Bailer-Jones told Universe Today via email, “the spaces between them are huge. So even over the (long) life of our galaxy so far, the probability of any two stars have actually collided — as opposed to just coming close — is extremely small.”

However, in astronomical terms, that still counts as a near-miss. In a universe that is 46 billion light years in any direction – and that’s just the observable part of it – an event that is expected to take place just 50 light days away is considered to be pretty close. And in the context of space and time, a quarter of a million to half a million years is the very near future.
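The two ways the distance gets quoted – about 8,000 AU and about 50 light days – are the same 0.04 parsecs expressed in different units, which a quick conversion confirms (the conversion constants are standard values):

```python
AU_PER_PARSEC = 206_264.8                  # 1 parsec in astronomical units
LIGHT_DAYS_PER_PARSEC = 3.26156 * 365.25   # 1 parsec in light-days

closest_approach_pc = 0.04
print(f"{closest_approach_pc * AU_PER_PARSEC:,.0f} AU")                 # ~8,250 AU
print(f"{closest_approach_pc * LIGHT_DAYS_PER_PARSEC:.0f} light-days")  # ~48 light-days
```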

The real concern is the effect that the passage of HIP 85605 could have on the Oort Cloud – the massive cloud of icy planetesimals that surrounds the Solar System. Given that the Oort Cloud extends from roughly 20,000 to 50,000 AU from our Sun, HIP 85605 would actually move through it and could cause serious disruption.

The layout of the Solar System, including the Oort Cloud, shown on a logarithmic scale. The cloud extends to roughly 50,000 AU from our Sun. Credit: NASA

Many of these planetesimals could be blown off into space, but others could be sent hurtling towards Earth. Assuming humanity is still around at this point in time, this could present a bit of an inconvenience, even if it is spread over the course of a million years.

As it stands, such “close encounters” between stars are quite rare. Stellar collisions usually only occur within binaries, where white dwarfs or neutron stars are concerned. “The exception to this is physically bound binary stars in a tight orbit,” said Bailer-Jones. “It can and does happen that one star expands during its evolution and will then interfere with the evolution of the other star. Neutron-neutron star pairs can even merge.”

But of course, on an astronomical timescale, stars passing each other by as they perform their cosmic dance is actually a pretty common occurrence. As part of Bailer-Jones’ larger study of over 50,000 stars within our galaxy, this “close encounter” is one of several predicted to take place in the distant future.

Of all of them, only HIP 85605 is expected to come within a single parsec, between 240 and 470 thousand years from now. He also indicates, with 90% confidence, that the last time such an encounter took place was 3.8 million years ago, when gamma Microscopii – a G7 giant with two and a half times the mass of our Sun – came within 0.35-1.34 pc of our system, which may have caused a large perturbation in the Oort Cloud.

Tightly bound binary stars, like the ones illustrated here, sometimes result in stellar collisions. Credit: Chandra

On his MPIA webpage, in the study’s FAQ section, Bailer-Jones claims that his research into stellar close encounters was motivated by a desire to study the potential impacts of astronomical phenomena on Earth, and is part of a larger program named “astroimpacts”.

“I am interested in the history of the Earth,” he says, “and astronomical phenomena have clearly played a role in this. But what role precisely, how significant, and what can we expect to happen in the future?” Whereas several studies have been conducted in the past, he feels that their methods – which include assuming a linear relative motion of stars – produce inaccurate results.

In contrast, Bailer-Jones’ study relies on “more recent data or re-analyses of data to produce hopefully more accurate results, and then compensate more rigorously for the uncertainties in the data, so that I can attach probabilities to my statements.”

As a result of this, he predicts that HIP 85605 has a 90% chance of passing within a single parsec of our Sun in the next 240 to 470 thousand years. However, he also admits that if the astrometry is incorrect, the next closest encounter won’t be happening for another 1.3 million years, when a K7 dwarf known as GL 710 is predicted to pass within 0.10 – 0.44 parsecs.

Bailer-Jones also believes that the European Space Agency’s Gaia spacecraft will help make more accurate predictions in the future. By understanding and mapping the environment of the Milky Way Galaxy, measuring the gravitational potential and determining the velocity of stars, scientists will be able to see how their various orbits around the galaxy’s center could cause them to intersect.

Passing stars are likely to host exoplanets of their own (like Kepler-186f, shown here in an artist’s impression), which would bring entire planetary systems within a few parsecs of Earth. Image Credit: NASA Ames/SETI Institute/JPL-CalTech

But perhaps the most interesting question explored on his webpage is the possibility of using stellar close encounters as a shortcut for exploring exoplanets. According to current cosmological models, the majority of stars within our galaxy are believed to host exoplanets.

So if a star is passing us at just a few parsecs (or even within a single parsec), why not hop on over and investigate its planets? Well, as Bailer-Jones indicates, that’s not really a practical idea: “Traveling to a star passing our solar system at a distance of around 1 pc with a relative speed of 30 km/s is no easier than traveling to the nearby stars (the nearest of which is just over 1 pc away). And we would have to wait 10s of thousands of years for the next encounter. If we can ever achieve interstellar travel, I don’t suppose it would take that long to achieve, so why wait?”
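Bailer-Jones’ point about travel time is easy to check: even at 30 km/s, roughly the relative speed he quotes for a passing star, crossing a single parsec takes tens of millennia. A back-of-the-envelope sketch:

```python
PARSEC_KM = 3.0857e13     # one parsec in kilometers
SECONDS_PER_YEAR = 3.156e7

travel_years = PARSEC_KM / 30.0 / SECONDS_PER_YEAR  # at a constant 30 km/s
print(f"{travel_years:,.0f} years to cross one parsec")  # tens of thousands of years
```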

Darn. Still, if there’s one thing this phenomenon and Bailer-Jones’ study remind us, it is that in the course of dancing around the center of the Milky Way, stars are not fixed at a single point in space. Not only do they periodically move within reach of each other, they can also have an effect on the systems they pass.

Alas, the timescale on which such things happen, not to mention the consequences they entail, are so large that people here on Earth need not worry. By the time HIP 85605 or GL 710 come within a parsec or two of us, we’ll either be long-since dead or too highly evolved to care!

*Update: According to a new study posted on arXiv by Erick E. Mamajek and associates, the passage of the recently-discovered low-mass star W0720 (aka. “Scholz’s Star”) – roughly 70,000 years ago and at a distance of 0.25 parsecs from our Sun – was the closest known encounter our Solar System has had with another star. They calculate a 98% probability that it penetrated the Solar System’s outer Oort Cloud. However, they also estimate that its impact on the flux of long-period comets was negligible, though the passage highlights how “dynamically important Oort Cloud perturbers may be lurking among nearby stars”.

Having read the study, Bailer-Jones notes on the updated FAQ section of his MPIA webpage that their analysis appears to be correct. Based on the assumption that the star was moving at a constant velocity relative to the Sun prior to the encounter, he agrees that the calculations on the distance and timing of the passage are valid. While his own study identified a possible closer encounter (HIP 85605), he reiterates that the data on this star is of poor quality. Another close encounter involved HIP 89825, but there the approach distance is estimated to have been 0.02 parsecs larger. Hence, W0720 can be said, with some degree of certainty, to have been the closest encounter.

The study appeared on Feb. 16th at arXiv Astrophysics.

Further Reading: arXiv Astrophysics, Max Planck Institute of Astronomy

2015 Expected to be a Record-Breaking Year for Soyuz-2 Workhorse

2014 was a banner year for the Russian Space Agency, with a record-setting fourteen launches of the next generation unmanned Soyuz-2 rocket. A number of other firsts took place in the course of the year as well, cementing the Soyuz family of rockets as the most flown and most reliable rocket group ever.

But it already seems as though the new year will be even better, with a full 20 missions scheduled to take place, a number of them holdovers from 2014.

The Soyuz-2 launcher currently operates alongside the Soyuz-U (mainly used for launching the unmanned Progress resupply spacecraft to the International Space Station) and the Soyuz-FG (primarily used for human flights with the Soyuz spacecraft on missions to the ISS), but according to Spaceflight 101, the Soyuz-2 will eventually replace the other vehicles once they are phased out.

In fact, in October of 2014, the Soyuz-2 carried out its first launch of a Progress cargo spacecraft. In another first, the last two launches of the year were conducted without the aid of DM blocks – a derivative of the Blok D upper stage developed during the 1960s.

As Leonid Shalimov, the CEO of NPO Avtomatiki, the Russian electronic engineering and research organization, said in an interview with the government-owned Russian news agency TASS: “Fourteen launches of Soyuz-2 were carried out in 2014 – a record number in the company history,” he said. “Meanwhile, a total of 19 launches were planned in the outgoing year, five have been postponed till 2015.”

Soyuz-2 rocket preparing to launch from the Plesetsk Cosmodrome in June, 2013. Image Credit: Russian Space News

As a leader in the development of radio-electronic equipment and rocket space systems, the company is behind a number of automated and integrated control systems used in space, at sea, in heavy industry, and by oil and natural gas companies.

However, it is arguably the company’s work with Soyuz-2 rockets that has earned the most attention. As a general designation for the newest version of the rocket, the Soyuz-2 is essentially a three-stage rocket carrier and will be used to transport crews and supplies into Low Earth Orbit (LEO).

Compared to previous generations of the rocket, the Soyuz-2 features updated engines with improved injection systems on the first-stage boosters, as well as the two core engine stages.

Unlike previous incarnations, the Soyuz-2 can also be launched from a fixed launch platform, since it is capable of performing a roll maneuver in flight to change its heading. The old analog control systems have also been upgraded with new digital flight control and telemetry systems that can adapt to changing conditions in mid-flight.

The Advanced Crew Transportation System, a next-generation reusable craft intended for a Russian lunar mission in 2028. Credit: TASS

In total, some 42 launches of this rocket have taken place over the past decade, the first taking place on November 8th, 2004 from the Plesetsk Cosmodrome – located about 200 km outside of Archangel.

The majority of launches were for the sake of deploying weather, observation and communication satellites.

You can see a full list of Soyuz launches and missions scheduled for 2015 here at the RussianSpaceWeb.

Long-term, the Soyuz-2 is also expected to play a key role in Russia’s plan for a manned lunar mission, which is tentatively scheduled to take place in 2028.

Further Reading: TASS

Making the Trip to Mars Cheaper and Easier: The Case for Ballistic Capture

When sending spacecraft to Mars, the current, preferred method involves shooting spacecraft towards Mars at full-speed, then performing a braking maneuver once the ship is close enough to slow it down and bring it into orbit.

Known as the “Hohmann Transfer” method, this type of maneuver is known to be effective. But it is also quite expensive and relies very heavily on timing. Hence, a new idea is being proposed: send the spacecraft out ahead of Mars’ orbital path and then wait for Mars to come on by and scoop it up.

This is what is known as “Ballistic Capture”, a new technique proposed by Professor Francesco Topputo of the Polytechnic Institute of Milan and Edward Belbruno, a visiting associate researcher at Princeton University and former member of NASA’s Jet Propulsion Laboratory.

In their research paper, which was published in arXiv Astrophysics in late October, they outlined the benefits of this method versus traditional ones. In addition to cutting fuel costs, ballistic capture would also provide some flexibility when it comes to launch windows.

MAVEN was launched into a Hohmann Transfer Orbit with periapsis at Earth's orbit and apoapsis at the distance of the orbit of Mars. Credit: NASA
MAVEN was launched into a Hohmann Transfer Orbit with periapsis at Earth’s orbit and apoapsis at the distance of the orbit of Mars. Credit: NASA

Currently, launches between Earth and Mars are limited to periods when the alignment between the two planets is just right. Miss this window, and you have to wait another 26 months for a new one to come along.
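That 26-month figure is the Earth–Mars synodic period, which falls straight out of the two planets’ orbital periods. A quick check (the orbital periods are standard values):

```python
EARTH_YEAR_DAYS = 365.25
MARS_YEAR_DAYS = 687.0

# Synodic period: the time for the Earth-Mars alignment to repeat.
synodic_days = 1 / (1 / EARTH_YEAR_DAYS - 1 / MARS_YEAR_DAYS)
print(f"{synodic_days:.0f} days (~{synodic_days / 30.44:.0f} months)")  # ~780 days, ~26 months
```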

At the same time, sending a rocket into space, through the vast gulf that separates Earth’s and Mars’ orbit, and then firing thrusters in the opposite direction to slow down, requires a great deal of fuel. This in turn means that the spacecraft responsible for transporting satellites, rovers, and (one day) astronauts need to be larger and more complicated, and hence more expensive.
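The fuel bill in question can be roughed out with the standard Hohmann transfer equations, which give the two burns needed to stretch a circular Earth-distance orbit out to Mars’ distance and then match speeds there. The sketch below assumes circular, coplanar orbits and textbook constants; it is an illustration of the method, not a mission calculation:

```python
import math

MU_SUN = 1.32712e11     # Sun's gravitational parameter, km^3/s^2
AU_KM = 1.496e8
R_EARTH = 1.000 * AU_KM  # orbital radii, treated as circular and coplanar
R_MARS = 1.524 * AU_KM

def hohmann_burns(mu, r1, r2):
    """Departure and arrival delta-v (km/s) for a Hohmann transfer from r1 to r2."""
    a = (r1 + r2) / 2                                     # transfer-ellipse semi-major axis
    dv_depart = math.sqrt(mu / r1) * (math.sqrt(r2 / a) - 1)
    dv_arrive = math.sqrt(mu / r2) * (1 - math.sqrt(r1 / a))
    return dv_depart, dv_arrive

dv1, dv2 = hohmann_burns(MU_SUN, R_EARTH, R_MARS)
flight_days = math.pi * math.sqrt(((R_EARTH + R_MARS) / 2) ** 3 / MU_SUN) / 86400
print(f"departure: {dv1:.2f} km/s, braking: {dv2:.2f} km/s, flight: {flight_days:.0f} days")
```

The braking burn at Mars, in the neighborhood of 2.6 km/s, is precisely the cost that ballistic capture aims to reduce.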

As Belbruno told Universe Today via email: “This new class of transfers is very promising for giving a new approach to future Mars missions that should lower cost and risk. This new class of transfers should be applicable to all the planets. This should give all sorts of new possibilities for missions.”

The idea was first proposed by Belbruno while he was working for JPL, where he was trying to come up with numerical models for low-energy trajectories. “I first came up with the idea of ballistic capture in early 1986 when working on a JPL study called LGAS (Lunar Get Away Special),” he said. “This study involved putting a tiny 100 kg solar electric spacecraft in orbit around the Moon that was first ejected from a Get Away Special Canister on the Space Shuttle.”

The Hiten spacecraft, part of the MUSES Program, was built by the Institute of Space and Astronautical Science of Japan and launched on January 24, 1990. It was Japan's first lunar probe. Credit: JAXA

The test of the LGAS design was not a resounding success, as it would have taken the craft two years to get to the Moon. But in 1990, when Japan was looking to rescue its failed lunar orbiter, Hiten, he submitted proposals for a ballistic capture attempt that were quickly incorporated into the mission.

“The time of flight for this one was 5 months,” he said. “It was successfully used in 1991 to get Hiten to the Moon.” Since that time, ballistic capture transfers have been used for other lunar missions, including the ESA’s SMART-1 mission in 2004 and NASA’s GRAIL mission in 2011.

But it is future missions, which involve much greater distances and expenditures of fuel, that Belbruno felt would benefit most from this method. Unfortunately, the idea initially met with some resistance, as no missions appeared well-suited to the technique.

“Ever since 1991 when Japan’s Hiten used the new ballistic capture transfer to the Moon, it was felt that finding a useful one for Mars was not possible due to Mars’ much longer distance and its high orbital velocity about the Sun. However, I was able to find one in early 2014 with my colleague Francesco Topputo.”

Artist's impression of India’s Mars Orbiter Mission (MOM). Credit: ISRO

Granted, there are some drawbacks to the new method. For one, a spacecraft sent out ahead of Mars along its orbital path, to be gradually caught by the planet’s gravity, takes longer to settle into orbit than one that fires its thrusters to brake directly into orbit.

In addition, the Hohmann Transfer method is a time-tested and reliable one. One of its most successful applications took place in September 2014, when India’s Mars Orbiter Mission (MOM) entered its historic orbit around the Red Planet. This not only constituted the first time an Asian nation reached Mars; it was also the first time that any space agency had achieved a Mars orbit on its first try.

Nevertheless, the possibilities for improvement over the current method of sending craft to Mars have people at NASA excited. As James Green, director of NASA’s Planetary Science Division, said in an interview with Scientific American: “It’s an eye-opener. This [ballistic capture technique] could not only apply here to the robotic end of it but also the human exploration end.”

Don’t be surprised then if upcoming missions to Mars or the outer Solar System are performed with greater flexibility, and on a tighter budget.

Further Reading: arXiv Astrophysics