Mr. Fusion? Compact Fusion Reactor Will Be Available in 5 Years, Says Lockheed-Martin

The Farnsworth Fusor; Pons and Fleischmann. It seems the trail to fusion energy has long gone cold – stone cold, that is, not cold as in cold fusion. Despite fusion’s promise as a sustainable and safe energy source, fusion reactors are not a dime a dozen, and they won’t be replacing coal-fired power plants any time soon. Or will they? Lockheed-Martin Skunk Works has announced a prototype compact fusion reactor that could be ready within five years. This revelation has raised eyebrows and sparked moments of enthusiasm.

But let’s consider this story and where it fits in fusion’s history and future.

For every Skunk Works project that has made the runway, such as the Stealth Fighter or the SR-71 Blackbird, there are untold others that never see the light of day. This adds to the surprise and mystery of Lockheed-Martin’s willingness to release images and a detailed narrative describing a compact fusion reactor project. The impact such a device would have on humanity can be imagined … and at the same time one imagines how much is unimaginable.

Lockheed-Martin engineers in the Skunk Works prepare a vessel, one component of an apparatus that they announced will lead to nuclear fusion in a truck-sized reactor within 5 years. An international effort is underway in Europe to create the world’s first practical tokamak fusion reactor, a much larger and costlier design that has never achieved the long-sought “breakeven” point. (Photo Credit: Lockheed-Martin)

The program manager of the Skunk Works’ compact fusion reactor experiment is Tom Maguire. Maguire and his team place emphasis on the turn-around time for modifying and testing the compact fusion device. Given the confidence they express in their design, and in their ability to quickly build, test, and modify, they claim only five years will be needed to reach a prototype.

What exactly the prototype represents was left unexplained, however. Maguire goes on to say that in 10 years the device will be seen in military applications, and in 20 years it will be delivered to the world as a replacement for the dirty energy sources in use today. Military applications at 10 years implies that the device will initially be too expensive for civilian operations, but military use would improve performance and lower costs, which could lead to the 20-year milestone if all goes as planned.

Their system uses magnetic confinement, the same basic principle behind the tokamak toroidal plasma confinement system that has received the greatest attention and government funding for over 50 years.

The ITER Tokamak Fusion Reactor is expected to begin operational testing in 2020 and begin producing deuterium-tritium fusion reactions in 2027. (Credits: ITER, Illus. T.Reyes)

The International Thermonuclear Experimental Reactor (ITER) is currently under construction in Europe under the assumption that it will be the first net-energy-producing fusion generator ever. It is funded by the European Union, India, Japan, the People’s Republic of China, Russia, South Korea, and the United States. But cost overruns have pushed its price from $5 billion to $50 billion.

ITER is scheduled to begin initial testing in 2019, about the time Lockheed-Martin’s compact fusion reactor prototype is expected. If Lockheed-Martin succeeds in its quest, it will effectively have skunked ITER and laid to waste a $50 billion international effort, at likely 1/1000th the cost.

There are a few reasons Lockheed-Martin has gone out on a limb. Consider the potential. One ton of uranium used in fission reactors has as much energy as 1,500 tons of coal. But fission reactors produce radioactive waste, and uranium is a finite resource without breeder reactors, themselves a nuclear proliferation risk. Fusion produces three to four times more energy per unit mass of fuel than fission. Additionally, the fuel – isotopes of hydrogen – is available from sea water, which is nearly limitless, and the byproducts are far less radioactive than those of fission. Fusion generators, once developed, could provide our energy needs for millions of years.
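As a rough check on that per-unit-mass claim, the standard textbook comparison divides each reaction’s energy yield by the number of nucleons involved (about 17.6 MeV for D-T fusion and about 200 MeV for U-235 fission). A minimal sketch, using those approximate values:

```python
# Rough energy-per-nucleon comparison: D-T fusion vs. U-235 fission.
# Reaction energies are approximate textbook values.

DT_FUSION_MEV = 17.6        # D + T -> He-4 + n releases ~17.6 MeV
DT_NUCLEONS = 5             # deuterium (2 nucleons) + tritium (3)

U235_FISSION_MEV = 200.0    # one U-235 fission releases ~200 MeV
U235_NUCLEONS = 236         # U-235 (235 nucleons) + absorbed neutron (1)

fusion_per_nucleon = DT_FUSION_MEV / DT_NUCLEONS        # ~3.5 MeV/nucleon
fission_per_nucleon = U235_FISSION_MEV / U235_NUCLEONS  # ~0.85 MeV/nucleon

print(f"Fusion:  {fusion_per_nucleon:.2f} MeV/nucleon")
print(f"Fission: {fission_per_nucleon:.2f} MeV/nucleon")
print(f"Ratio:   {fusion_per_nucleon / fission_per_nucleon:.1f}x")  # ~4x
```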

More pragmatically, corporations promote their R&D. They are in a constant state of competition. They present a profile that ranges from the practical to the cutting edge to instill confidence in the Washington sources of their funding. Furthermore, their competitors have high-profile individuals and projects. A fusion project demonstrates that Lockheed-Martin is doing more than building better mousetraps.

To date, no nuclear fusion reactor has achieved breakeven – the point at which the fusion device outputs as much energy as is put in to operate it. Magnetic confinement devices such as the various tokamaks, Lawrence Livermore’s laser-based inertial confinement method, and even the simple Philo Farnsworth Fusor can all claim to be generating energy from fusion reactions. They just all consume more energy than their devices output.
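The usual figure of merit here is the fusion gain factor Q, the ratio of fusion power out to heating power in, with breakeven at Q = 1. A minimal illustration (the function name is mine; the JET figures are the commonly quoted ones from its record 1997 shot):

```python
def fusion_gain(power_out_mw: float, power_in_mw: float) -> float:
    """Fusion gain factor Q: fusion power out over heating power in."""
    return power_out_mw / power_in_mw

# JET's record 1997 shot is commonly quoted as ~16 MW of fusion power
# for ~24 MW of heating power -- the closest any device has come.
q = fusion_gain(16.0, 24.0)
print(f"Q = {q:.2f} -> {'breakeven' if q >= 1.0 else 'below breakeven'}")
```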

An example of a homemade Fusor. Originally invented in the 1960s by the inventor of the television, Philo Farnsworth. (Credit: Wikipedia, W.Jack)

The fusor, invented in the 1960s by Farnsworth and Hirsch, is an electrostatic plasma confinement system. It uses electric fields to confine and accelerate ions through a central point, at which some ions collide with sufficient energy to fuse. Although the needed voltage – about 4,000 volts, not uncommon in household devices – is readily achieved by amateurs, no fusor has reached breakeven, and theoretically none ever will. The challenge in reaching breakeven involves not just energy and temperature but also plasma density. Replicating, in a controllable way, the conditions that exist in the cores of stars is not easy. Nevertheless, there is a robust community of “fusioneers” around the world, linked by the internet.
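For a sense of scale: an ion accelerated through a potential difference V gains kinetic energy qV, so the grid voltage sets the collision energy directly. A quick sketch, assuming singly charged deuterium ions (the loop values are purely illustrative):

```python
# Kinetic energy gained by a singly charged ion falling through a voltage.

E_CHARGE = 1.602e-19  # elementary charge, coulombs

def ion_energy(voltage_v: float, charge_state: int = 1):
    """Return (energy in eV, energy in joules) for an accelerated ion."""
    energy_ev = charge_state * voltage_v
    return energy_ev, energy_ev * E_CHARGE

for volts in (4_000, 40_000):
    ev, joules = ion_energy(volts)
    print(f"{volts:>6} V -> {ev / 1000:.0f} keV ({joules:.2e} J) per deuteron")
# The D-D fusion cross-section is vanishingly small at these energies,
# which is one reason fusors consume far more power than they produce.
```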

Mr Fusion, the compact fusion reactor that drove the 21st Century version of the DeLorean in Back to the Future. The movie trilogy grossed $1 billion at the box office. Mr Fusion could apparently function off of any water-bearing material. (Credit: Universal Pictures)

It remains to be seen who will demonstrate a viable fusion reactor, in what form, and when. With Lockheed-Martin’s latest announcement, once again, fusion energy is “just around the corner.” But many skeptics remain who will quickly state that commercial fusion energy is still 50 years in the future. So long as Maguire’s team meets milestones with the expected performance improvements, their work will go on. The potential of fusion energy remains too great to dismiss categorically.

Source: Lockheed-Martin Products Page, Compact Fusion

When Will We Become Interstellar?

Dr. Ian O’Neill is one of the coolest scientists we know, so we sat him down at the YouTube Space and asked him a real zinger – when will we humans become an interstellar race, like the ones we’re used to seeing on Star Trek? Here’s what he had to say!

Navigating the Solar System Using Pulsars as GPS

Picture the scene: it’s the not-too-distant future, and humanity has started to construct colonies and habitats all across our solar system. We’re gearing up to take that next big step into the unknown – actually leaving the cozy protection of the Sun’s heliosphere and venturing into interstellar space. Before this future can happen, however, there’s an important thing that is often overlooked in discussions on this subject.

Navigation.

Just as sailors once used the stars to navigate the sea, space travelers may be able to use the stars to navigate the solar system. Except that this time, the stars we’d use would be dead ones: a specific class of neutron star known as pulsars, defined by the repeated pulses of radiation they emit. The trick, according to a recent paper, may be to use pulsars as a form of interplanetary – and possibly even interstellar – GPS.

Theories and ideas on spacecraft engines are plentiful. Foundations such as Icarus Interstellar keenly advocate the development of new propulsion systems, with some systems such as the VASIMR thrusters appearing rather promising. Meanwhile, fusion rockets are expected to be able to take passengers on a round trip from Earth to Mars in just 30 days, and researchers elsewhere are working on real life warp drives, not unlike the ones we all know and love from the movies.

Interplanetary GPS

For Voyager 2, out on the edge of our Solar system, conventional navigation methods don’t work too well. Credit: NASA

But navigation is just as important. After all, space is mind-meltingly vast and mostly empty. The prospect of getting lost out in the emptiness is, frankly, terrifying.

To date, this hasn’t really been a problem, particularly since we’ve only sent a small handful of craft past Mars. As a result, we currently use a messy mishmash of techniques to keep track of spacecraft from here on Earth – essentially tracking them with telescopes while relying heavily on their planned trajectories. This is also only as accurate as our instruments on Earth, meaning that as a craft gets more distant, our idea of exactly where it is becomes less and less accurate.

This is all well and good when we only have a few craft to track, but when space travel becomes more easily attainable and human passengers are involved, routing everything through Earth will start to become more and more difficult. This is particularly the case if we’re planning on leaving the confines of our home star – Voyager 2 is presently over 14 light hours away, meaning that Earth-based transmissions take over half a day to reach it.

Navigating Earth with modern technology is quite simple thanks to the array of GPS satellites we have in orbit around our world. Those satellites are constantly transmitting signals which are, in turn, received by the GPS unit you may have on your car dashboard or in your pocket. As with all other electromagnetic transmissions, those signals travel at the speed of light, giving a slight delay between when they were transmitted and when they’re received. By using the signals from four or more satellites and timing those delays, a GPS unit can pinpoint your location on the surface of Earth with remarkable accuracy.
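To make that concrete, here is a minimal Gauss-Newton sketch of the trilateration step: four satellites at known positions broadcast time-stamped signals, and the receiver solves for its position plus its own clock bias from the measured delays. Every number and name below is invented for illustration; real GPS layers many corrections (orbits, ionosphere, relativity) on top of this basic geometry:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def gps_fix(sat_pos, delays, guess, iters=10):
    """Solve for receiver position (x, y, z) and clock bias b (meters)
    from signal travel times to >= 4 satellites.
    Residual for satellite i:  |sat_i - p| + b - c * delay_i
    """
    state = np.asarray(guess, dtype=float)  # [x, y, z, b]
    for _ in range(iters):
        p, b = state[:3], state[3]
        diffs = sat_pos - p                     # vectors receiver -> satellites
        ranges = np.linalg.norm(diffs, axis=1)  # geometric distances
        residuals = ranges + b - C * delays
        # Jacobian: d(residual)/d(x,y,z) = -(sat - p)/range, d/db = 1
        J = np.hstack([-diffs / ranges[:, None], np.ones((len(delays), 1))])
        state -= np.linalg.lstsq(J, residuals, rcond=None)[0]
    return state

# Synthetic test: satellites ~20,000 km up, receiver on the ground.
sats = np.array([[15e6, 10e6, 20e6], [-15e6, 12e6, 21e6],
                 [5e6, -18e6, 19e6], [-8e6, -9e6, 22e6]])
truth = np.array([1.2e6, -0.8e6, 6.37e6])
clock_bias_m = 450.0  # receiver clock error expressed as a distance
delays = (np.linalg.norm(sats - truth, axis=1) + clock_bias_m) / C

x, y, z, b = gps_fix(sats, delays, guess=[0, 0, 6.4e6, 0])
print(f"position error: {np.linalg.norm([x, y, z] - truth):.3e} m, bias: {b:.1f} m")
```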

The Icarus Pathfinder starship passing by Neptune. Credit: Adrian Mann

The pulsar navigation system proposed by Werner Becker, Mike Bernhardt, and Axel Jessner at the Max Planck Institute works in a very similar way, using the pulses emitted by pulsars. By knowing the initial position and velocity of your spacecraft, recording those pulses, and treating the Sun as a fixed reference point, you can calculate your exact location inside the solar system.

Considering the Sun to be fixed this way is technically referred to as an inertial reference frame, and if you compensate for the motion of the Sun through our galaxy, the system still works perfectly well when leaving the Solar system! All you need is to keep track of a minimum of 3 pulsars (ideally 10, for the most accurate results), and you can pinpoint your location with surprising accuracy!
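In essence, each pulsar’s pulse phase tells you how far you have moved along that pulsar’s line of sight relative to the arrival times predicted for a reference point, and three or more well-separated directions pin down the full 3D offset. A toy linearized sketch of that last step (the directions and offsets are made up; a real system like the one Becker and colleagues describe must also resolve phase ambiguities and apply relativistic corrections):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def position_offset(pulsar_dirs, timing_offsets_s):
    """Estimate a 3D position offset from a reference point.
    pulsar_dirs: (N, 3) unit vectors toward N >= 3 pulsars.
    timing_offsets_s: measured-minus-predicted pulse arrival times.
    Each second of timing offset corresponds to one light-second of
    displacement along that pulsar's line of sight:  n_i . dx = c * dt_i
    """
    A = np.asarray(pulsar_dirs, dtype=float)
    rhs = C * np.asarray(timing_offsets_s, dtype=float)
    dx, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return dx  # meters

# Three made-up pulsar directions (roughly orthogonal for a good fix).
dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.96, 0.28],
                 [0.1, -0.28, 0.955]])
dirs /= np.linalg.norm(dirs, axis=1)[:, None]

true_offset = np.array([3_000.0, -1_200.0, 500.0])  # meters
dts = dirs @ true_offset / C                        # implied timing offsets
print(position_offset(dirs, dts))                   # recovers ~[3000, -1200, 500]
```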

Interestingly enough, the idea of using pulsars as navigation beacons dates all the way back to 1974, notably not long after Carl Sagan had used pulsars to show Earth’s location on the plaques attached to the Pioneer 10 and 11 space probes. If Project Daedalus had ever been constructed, it might have been equipped with a system not unlike the one described here.

Packing for the long haul

Becker and his colleagues looked at the different types of pulsar visible in the sky and picked out a type known as rotation-powered pulsars as the best candidates for a galactic positioning system. In particular, a sub-type of these known as millisecond pulsars is ideal. Being older than most pulsars, they have weak magnetic fields, meaning they take a long time to slow their spin rates – helpful, as strongly magnetised pulsars can sometimes change their rotation speed without warning.
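That stability argument can be made quantitative with the standard spin-down relations: a pulsar’s characteristic age is roughly tau = P / (2 × Pdot), so a tiny period derivative means an old, stable clock. A quick comparison using rounded catalog values (treat them as order-of-magnitude inputs):

```python
# Compare a young pulsar with a recycled millisecond pulsar using the
# standard spin-down estimate: characteristic age tau = P / (2 * Pdot).

SECONDS_PER_YEAR = 3.156e7

pulsars = {
    # name: (spin period P in seconds, period derivative Pdot, dimensionless)
    "Crab (young)": (0.033, 4.2e-13),
    "B1937+21 (millisecond)": (0.00156, 1.1e-19),
}

for name, (p, pdot) in pulsars.items():
    tau_years = p / (2 * pdot) / SECONDS_PER_YEAR
    print(f"{name:>24}: P = {p * 1000:.3g} ms, "
          f"characteristic age ~ {tau_years:.2g} yr")
# The millisecond pulsar's tiny Pdot means its ticks drift far less --
# exactly the clock-like stability a navigation system wants.
```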

An x-ray image of the Vela pulsar, one of the brightest known pulsars in the x-ray sky. Credit: NASA/CXC/PSU/G.Pavlov et al.

With countless pulsars to choose from, the question turns to how you might equip your spacecraft to track them. Pulsars are easiest to spot in either x-rays or radio waves, so there’s a choice as to which is better to use. Essentially, it all turns out to be a question of how large your spacecraft is.

Smaller vehicles, more akin to modern spacecraft, would be best off using x-rays to track pulsars. X-ray mirrors, like the ones used in certain orbiting space telescopes, are compact and lightweight, meaning that a few could be added for a navigation system without increasing the overall mass of the craft all that much. They have the minor disadvantage that they can be damaged by an x-ray source that is too bright, though this would only be a problem under some unfortunate circumstances.

On the other hand, if you’re piloting a large spaceship between planets or even stars, you would likely be better off using radio waves. We know a lot more about the way pulsars work at radio frequencies, and we can measure them there with a higher degree of accuracy. The only drawback is that the radio telescopes you’d need to install on your ship would require an area of at least 150 m². But then, if you happened to be flying a starship, that kind of size probably wouldn’t make much difference.

It’s interesting to bear in mind the way that astronomers frequently use the analogy of pulsars being “like lighthouses” when explaining why they appear to pulse. If we someday find ourselves using them as actual navigation aids, that analogy may take on a whole new meaning!

You can read the team’s paper here.

The Icarus Starfinder, shown leaving the Solar system. Ships like this may be equipped with a pulsar navigation system. Credit: Adrian Mann

Images are used here with kind permission from Adrian Mann of Icarus Interstellar, whose full gallery is viewable online at bisbos.com

What is Interstellar Space?

Glittering Metropolis of Stars


The boundary of what is known, that place called the great frontier, has always intrigued and enticed us. The mystery of the unknown, the potential for discovery, the fear, the uncertainty; that place just beyond the edge has got it all! At one time, planet Earth contained many such places for explorers, vagabonds, and conquerors. But unfortunately, we’ve run out of spaces to label “here be dragons” here at home. Now humanity must look to the stars to find such places again. These areas, the vast stretches of space that fall between the illuminated regions where stars sit, are what is known as interstellar space. Strictly speaking, this is the space between stars within a galaxy; the space between galaxies is usually called intergalactic space.

On the whole, this area of space is defined by its emptiness. That is, there are no stars or planetary bodies in these regions that we know of. That does not mean, however, that there is absolutely nothing there. In fact, interstellar regions contain quantities of gas, dust, and radiation. The first two make up what is known as the interstellar medium (or ISM), the matter that fills interstellar space and blends smoothly into the surrounding intergalactic space. The energy that occupies the same volume, in the form of electromagnetic radiation, is known as the interstellar radiation field. The ISM is thought to be made up primarily of plasma (ionized hydrogen gas) because its temperature is high by terrestrial standards.

The nature of the interstellar medium has received the attention of astronomers and scientists over the centuries. The term first appeared in print in the 17th century, in the works of Sir Francis Bacon and Robert Boyle, both of whom were referring to the spaces that fall between stars. Before the development of electromagnetic theory, early physicists believed that space must be filled with an invisible “aether” in order for light to pass through it. It was not until the 20th century, with deep photographic imaging and spectroscopy, that scientists were able to show that matter and gas exist in these regions. The discovery of cosmic rays in 1912 was a further boon, leading to the theory that interstellar space was pervaded by them. With the advent of ultraviolet, x-ray, microwave, and gamma ray detectors, scientists have been able to “see” these kinds of energy at work in interstellar space and confirm their existence.

Several spacecraft have been launched with the intention of sending back information from interstellar space. These include Voyager 1 and 2, which have cleared the known boundaries of the Solar System and passed into the heliopause. They are expected to continue operating for the next 25 to 30 years, sending back data on magnetic fields and interstellar particles.

We have written many articles about interstellar space for Universe Today. Here’s an article about deep space, and here’s an article about interstellar space travel.

If you’d like more information on the Interstellar Space, here’s a link to Voyager’s Interstellar Mission Page, and here’s the homepage for Interstellar Science.

We’ve recorded an episode of Astronomy Cast all about Interstellar Travel. Listen here, Episode 145: Interstellar Travel.

Sources:
http://en.wikipedia.org/wiki/Interstellar_space#Interstellar
http://en.wikipedia.org/wiki/Interstellar_medium
http://www.seasky.org/solar-system/interstellar-space.html
http://en.wikipedia.org/wiki/Electromagnetic_radiation
http://en.wikipedia.org/wiki/Heliopause#Heliopause

Probing Exoplanets

Sometimes topics segue perfectly. With the recent buzz about habitable planets, followed by the rain-on-the-parade articles we’ve had about the not-insignificant errors in the detections of planets around Gliese 581, as well as in the detection of molecules in exoplanet atmospheres, it’s not been the best of times for finding life. But in a comment on my last article, Lawrence Crowell noted: “You can’t really know for sure whether a planet has life until you actually go there and look on the ground. This is not at all easy, and probably it is at best possible to send a probe within a 25 to 50 light year radius.”

This is right on the mark, and happens to be another topic that’s been under some discussion on arXiv recently in a short series of papers and responses. The first paper, accepted to the journal Astrobiology and led by Jean Schneider of the Observatory of Paris-Meudon, seeks to describe “the far future of exoplanet direct characterization”. In general, this paper discusses where the study of exoplanets could go from our current knowledge base. It proposes two main directions: finding more planets, to better survey the parameter space planets inhabit, or studying the planets we do know in more depth over the long term.

But perhaps the more interesting aspect of the paper, and the one that’s generated a rare response, is what can be done should we detect a planet with promising characteristics relatively nearby. They first consider trying to directly image the planet’s surface, and calculate that the diameter of a telescope capable of doing so would be roughly half that of the Sun. Instead, if we truly wish to get a direct image, the best bet would be to go there. They quickly address a few of the potential challenges.
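That startling aperture estimate falls straight out of the diffraction limit, D ≈ 1.22·λ·d/x, for resolving features of size x at distance d. A quick sketch with my own illustrative inputs (visible light, a target at 10 parsecs; the paper’s exact assumptions may differ):

```python
# Diffraction-limited aperture needed to resolve surface features of
# size x at distance d:  D ~ 1.22 * wavelength * d / x

PARSEC_M = 3.086e16
SUN_DIAMETER_M = 1.39e9
WAVELENGTH_M = 550e-9          # green light

distance_m = 10 * PARSEC_M     # a "nearby" exoplanet
for feature_m in (100.0, 300.0, 1000.0):
    aperture_m = 1.22 * WAVELENGTH_M * distance_m / feature_m
    print(f"resolve {feature_m:>6.0f} m -> aperture {aperture_m:.2e} m "
          f"({aperture_m / SUN_DIAMETER_M:.2f} solar diameters)")
```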

The first is cosmic rays. These high-energy particles can wreak havoc on electronics. The second is simple dust grains. The team calculates that an impact with “a 100 micron interstellar grain at 0.3 the speed of light has the same kinetic energy as a 100 ton body at 100 km/hour”. With present technology, any spacecraft equipped with sufficient shielding would be prohibitively massive and difficult to accelerate to the velocities necessary to make the trip worthwhile.

But Ian Crawford, of the University of London, thinks the risk posed by such grains may be overstated. Firstly, Crawford believes Schneider’s requirement of 30% of the speed of light is somewhat overzealous. Most proposals for interstellar travel by probes use a value closer to 10% of the speed of light. In particular, the most exhaustive proposal yet created (the Daedalus project) only attempted to achieve a velocity of 0.12c. The ability to produce such a craft was well beyond the means of the time, but with the miniaturization of many electronic components, the prospect may need to be reevaluated.

Aside from the overestimate of the necessary velocity, Crawford suggests that Schneider’s team overstated the size of dust grains. In the solar neighborhood, dust grains are estimated to be nearly 100 times smaller than reported by Schneider’s team; recent studies put the upper limit for dust particles closer to 4.5 micrometers. The combination of the revised size estimate and the lower velocity takes the energy released in a collision from a whopping 4 × 10⁷ joules to a mere 4.5 joules.
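Both figures are easy to reproduce with plain kinetic energy, KE = ½mv². The sketch below assumes a spherical silicate grain of density ~2.3 g/cm³ and reads Schneider’s “100 micron grain” as a 100 µm radius, since those assumptions reproduce his 4 × 10⁷ J figure; Crawford’s revision shrinks the grain a hundredfold and slows the craft to 0.1c:

```python
import math

C = 3.0e8     # speed of light, m/s
RHO = 2300.0  # assumed silicate grain density, kg/m^3

def grain_ke(radius_m, speed_m_s):
    """Non-relativistic kinetic energy of a spherical dust grain."""
    mass = RHO * (4.0 / 3.0) * math.pi * radius_m**3
    return 0.5 * mass * speed_m_s**2

# Schneider et al.: "100 micron" grain at 0.3c
print(f"Schneider: {grain_ke(100e-6, 0.3 * C):.1e} J")   # ~4e7 J

# Crawford: grains ~100x smaller, cruise speed 0.1c
print(f"Crawford:  {grain_ke(1e-6, 0.1 * C):.1e} J")     # ~a few joules

# For comparison: a 100-ton body at 100 km/h
print(f"Truck:     {0.5 * 1e5 * (100 / 3.6)**2:.1e} J")  # ~4e7 J
```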

Lastly, Crawford suggests that there may be alternative ways to provide shielding besides a brute-force wall of mass. If a spacecraft were able to detect incoming particles using radar or another technique, it could conceivably destroy them with lasers, or deflect them with an electromagnetic field.

But Schneider wasn’t finished. He issued a response to Crawford’s response. In it, he criticizes Crawford’s optimistic vision of nuclear or anti-matter propulsion systems. He notes that, thus far, nuclear propulsion has only been able to produce short impulses instead of continuous thrust, and that, although some electronics have been miniaturized, the best analogue yet developed, the National Ignition Facility, “with all its control and cooling systems, is presently quite a non-miniaturized building.”

Anti-matter propulsion may be even more difficult. Currently, our ability to produce anti-matter is severely limited. Schneider estimates that producing the required amounts would take 200 terawatts of power. Meanwhile, the overall power consumption of the entire Earth is only about 20 terawatts.

In response to the charge of overestimation, Schneider notes that, although such large dust grains would be rare, “even two lethal or severe collisions are prohibitory”. He does not, however, go on to make any honest estimate of the actual probability of such a collision.

Ultimately, Schneider concludes that all discussion is, at best, extremely preliminary. Before any such undertaking would be seriously considered, it would require “a precursor mission to secure the technological concept, including shielding mechanisms, at say 500 to 1000 Astronomical Units.” Schneider and his team remind us that the technology is not yet there and that there are legitimate threats to address. Crawford, on the other hand, suggests that some of these challenges are ones we may already be well on the road to addressing and constraining.