Instead of Building Single Monster Scopes like James Webb, What About Swarms of Space Telescopes Working Together?

In the coming decade, a number of next-generation instruments will take to space and begin observing the Universe. These will include the James Webb Space Telescope (JWST), which is likely to be followed by concepts like the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR), the Origins Space Telescope (OST), the Habitable Exoplanet Imager (HabEx) and the Lynx X-ray Surveyor.

These missions will look farther into the cosmos than ever before and help astronomers address questions like how the Universe evolved and whether there is life in other star systems. Unfortunately, all these missions have two things in common: in addition to being very large and complex, they are also very expensive. Hence, some scientists are proposing that we rely on more cost-effective ideas like swarm telescopes.

Two such scientists are Jayce Dowell and Gregory B. Taylor, a research assistant professor and professor (respectively) with the Department of Physics and Astronomy at the University of New Mexico. Together, the pair outlined their idea in a study titled “The Swarm Telescope Concept”, which recently appeared online and was accepted for publication by the Journal of Astronomical Instrumentation.

Illustration of NASA’s James Webb Space Telescope. Credits: NASA

As they state in their study, traditional astronomy has focused on the construction, maintenance and operation of single telescopes. The one exception to this is radio astronomy, where facilities have been spread over an extensive geographic area in order to obtain high angular resolution. Examples of this include the Very Long Baseline Array (VLBA), and the proposed Square Kilometer Array (SKA).

There is also the problem of telescopes becoming increasingly reliant on computing and digital signal processing. As the authors explain in their study, telescopes commonly carry out multiple simultaneous observation campaigns, which increases the operational complexity of a facility due to conflicting configuration requirements and scheduling considerations.

A possible solution, according to Dowell and Taylor, is to rethink the telescope. Instead of a single instrument, the telescope would consist of a distributed array where many autonomous elements come together through a data transport system to function as a single facility. This approach, they claim, would be especially useful when it comes to the Next Generation Very Large Array (NGVLA) – a future interferometer that will build on the legacy of the Karl G. Jansky Very Large Array and the Atacama Large Millimeter/submillimeter Array (ALMA). As they state in their study:

“At the core of the swarm telescope is a shift away from thinking about an observatory as a monolithic entity. Rather, an observatory is viewed as many independent parts that work together to accomplish scientific observations. This shift requires moving part of the decision making about the facility away from the human schedulers and operators and transitioning it to “software defined operators” that run on each part of the facility. These software agents then communicate with each other and build dynamic arrays to accomplish the goals of multiple observers, while also adjusting for varying observing conditions and array element states across the facility.”

This idea for a distributed telescope is inspired by the concept of swarm intelligence, where large swarms of robots are programmed to interact with each other and their environment to perform complex tasks. As the authors explain, the facility comes down to three major components: autonomous element control, a method of inter-element communication, and data transport management.

Of these components, the most critical is the autonomous element control, which governs the actions of each element of the facility. While similar to the traditional monitoring and control systems used to run individual robotic telescopes, this system would be responsible for far more. Overall, the element control would be responsible for ensuring the safety of each element and maximizing its utilization.

“The first, safety of the element, requires multiple monitoring points and preventative actions in order to identify and prevent problems,” they explain. “The second direction requires methods of relating the goals of an observation to the performance of an element in order to maximize the quantity and quality of the observations, and automated methods of recovering from problems when they occur.”
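To make this division of labor concrete, here is a minimal sketch of what one cycle of such a “software defined operator” might look like. Everything here – the names, the monitoring points, the thresholds – is a hypothetical illustration, not code from the study:

```python
import random

# Hypothetical safety limits for a single array element (illustrative values)
SAFETY_LIMITS = {"wind_speed_ms": 20.0, "enclosure_temp_c": 45.0}

def read_monitoring_points():
    """Stand-in for an element's telemetry feed (wind, temperature, etc.)."""
    return {"wind_speed_ms": random.uniform(0.0, 25.0),
            "enclosure_temp_c": random.uniform(10.0, 50.0)}

def control_step(pending_observations):
    """One pass of the element controller: safety first, then utilization."""
    status = read_monitoring_points()
    # Safety: check every monitoring point; take preventative action if exceeded
    for point, limit in SAFETY_LIMITS.items():
        if status[point] > limit:
            return "stow"  # e.g. park the antenna until conditions improve
    # Utilization: pick the queued observation this element can serve best
    if pending_observations:
        best = max(pending_observations, key=lambda obs: obs["priority"])
        return "observe " + best["target"]
    return "idle"

print(control_step([{"target": "J0332+5434", "priority": 0.9},
                    {"target": "Cas A", "priority": 0.7}]))
```

In a real facility, each element would run a loop like this continuously while reporting its state to the rest of the swarm.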

The second component, inter-element communication, is what allows the individual elements to come together to form the interferometer. This can take the form of a leaderless system (where there is no single point of control), or an organizer system, where all of the communication between the elements and with the observation queue is done through a single point of control (i.e. the organizer).
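As a rough sketch of the organizer pattern (the class and method names below are invented for illustration; the study does not prescribe an API), the single point of control simply brokers between the observation queue and whichever elements have reported themselves available:

```python
class Organizer:
    """Single point of control that builds dynamic arrays from free elements."""

    def __init__(self):
        self.available = set()  # elements that have reported in as idle

    def register(self, element_id):
        self.available.add(element_id)

    def build_array(self, observation, n_elements):
        """Assign up to n_elements free elements to one observation."""
        chosen = sorted(self.available)[:n_elements]
        self.available -= set(chosen)  # the chosen elements are now busy
        return {"observation": observation, "elements": chosen}

organizer = Organizer()
for element_id in ("E-01", "E-02", "E-03"):
    organizer.register(element_id)
print(organizer.build_array("pulsar timing", n_elements=2))
# A leaderless design would replace this broker with peer-to-peer negotiation.
```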

Long Wavelength Array, operated by the University of New Mexico. Credit: phys.unm.edu

Lastly, there is the issue of data transport management, which can take one of two forms based on existing telescopes. These range from fully off-line systems, where correlation is done post-observation – as with the Very Long Baseline Array (VLBA) – to fully-connected systems, where correlation is done in real-time (as with the VLA). For the sake of their array, the team emphasized that connectivity and real-time correlation are a must.

After considering all these components and how they are used by existing arrays, Dowell and Taylor conclude that the swarm concept is a natural extension of the advances being made in robotic and thinking telescopes, as well as interferometry. The advantages of this are spelled out in their conclusions:

“It allows for more efficient operations of facilities by moving much of the daily operational work done by humans to autonomous control systems. This, in turn, frees up personnel to focus on the scientific output of the telescope. The swarm concept can also combine the unused resources of the different elements together to form an ad hoc array.”

In addition, swarm telescopes would offer new opportunities for funding, since they would consist of small elements that can be owned and operated by different entities. In this way, different organizations would be able to conduct science with their own elements while also benefiting from large-scale interferometric observations.

Graphic depiction of Modular Active Self-Assembling Space Telescope Swarms
Credit: D. Savransky

This concept is similar to the Modular Active Self-Assembling Space Telescope Swarms, which calls for a swarm of robots that would assemble in space to form a 30 meter (~100 ft) telescope. The concept was proposed by a team of American astronomers led by Dmitri Savransky, an assistant professor of mechanical and aerospace engineering at Cornell University.

This proposal was part of the 2020 Decadal Survey for Astrophysics and was recently selected for Phase I development as part of the 2018 NASA Innovative Advanced Concepts (NIAC) program. So while many large-scale telescopes will be entering service in the near future, the next-next-generation of telescopes could include a few arrays made up of swarms of robots directed by artificial intelligence.

Such arrays would be capable of achieving high-resolution astronomy and interferometry at lower costs, and could free up large, complex arrays for other observations.

Further Reading: arXiv

Engineers Propose a Rocket that Consumes Itself as it Flies to Space

When it comes to the new era of space exploration, one of the primary focuses has been on cutting costs. By reducing the costs associated with individual launches, space agencies and private aerospace companies will not only be able to commercialize Low Earth Orbit (LEO), but also mount far more in the way of exploration missions and maybe even colonize space.

Several methods have been proposed so far for reducing launch costs, including reusable rockets and single-stage-to-orbit rockets. However, a team of engineers from the University of Glasgow and Ukraine recently proposed an entirely different idea that could make launching small payloads affordable – a self-eating rocket! This “autophage” rocket could send small satellites into space more easily and more affordably.

The study, which describes how they built and tested the “autophage” engine, recently appeared in the Journal of Spacecraft and Rockets under the title “Autophage Engines: Toward a Throttleable Solid Motor”. The team was led by Vitaly Yemets and Patrick Harkness – a Professor from the Oles Honchar Dnipro National University in Ukraine and a Senior Lecturer from the University of Glasgow, respectively.

The autophage engine being tested at the Dnipro testing lab in Ukraine. Credit: University of Glasgow

Together, the team addressed one of the most pressing issues with rockets today: the storage tanks that hold a rocket’s propellants as it climbs weigh many times more than the payload itself. This reduces the efficiency of the launch vehicle and also adds to the problem of space debris, since these fuel tanks are disposable and fall away once spent.

As Dr Patrick Harkness, who led Glasgow’s contribution to the work, explained in a recent University of Glasgow press release:

“Over the last decade, Glasgow has become a centre of excellence for the UK space industry, particularly in small satellites known as ‘CubeSats’, which provide researchers with affordable access to space-based experiments. There’s also potential for the UK’s planned spaceport to be based in Scotland. However, launch vehicles tend to be large because you need a large amount of propellant to reach space. If you try to scale down, the volume of propellant falls more quickly than the mass of the structure, so there is a limit to how small you can go. You will be left with a vehicle that is smaller but, proportionately, too heavy to reach an orbital speed.”

In contrast, an autophage engine consumes its own structure during ascent, so more cargo capacity could be freed up and less debris would enter orbit. The propellant consists of a solid fuel rod (made of a plastic like polyethylene) on the outside and an oxidizer on the inside. By driving the rod into a hot engine, the fuel and oxidizer are vaporized to create gas that then flows into the combustion chamber to produce thrust.

The use of autophage engines on rockets could allow for the deployment of small satellites cheaply and efficiently, without adding to the problem of space debris. Credit: AMNH.

“A rocket powered by an autophage engine would be different,” said Dr. Harkness. “The propellant rod itself would make up the body of the rocket, and as the vehicle climbed the engine would work its way up, consuming the body from base to tip. That would mean that the rocket structure would actually be consumed as fuel, so we wouldn’t face the same problems of excessive structural mass. We could size the launch vehicles to match our small satellites, and offer more rapid and more targeted access to space.”

The research team also showed that the engine could be throttled simply by varying the speed at which the rod is driven into the engine – something rare in a solid motor. During lab tests, the team was able to sustain rocket operations for 60 seconds at a time. As Dr. Harkness said, the team hopes to build on this and eventually conduct a launch test:

“While we’re still at an early stage of development, we have an effective engine testbed in the laboratory in Dnipro, and we are working with our colleagues there to improve it still further. The next step is to secure further funding to investigate how the engine could be incorporated into a launch vehicle.”
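The throttling result follows from simple bookkeeping: the mass flow into the engine is the rod’s cross-sectional area times its density times the feed speed, and thrust scales with that mass flow. The sketch below uses assumed, illustrative numbers – they are not figures from the paper:

```python
import math

# Assumed, illustrative values (not from Yemets and Harkness's paper)
ROD_DIAMETER_M = 0.10          # diameter of the propellant rod
ROD_DENSITY_KG_M3 = 950.0      # roughly the density of polyethylene
EXHAUST_VELOCITY_M_S = 2000.0  # assumed effective exhaust velocity

def thrust_newtons(feed_speed_m_s):
    """Thrust ~ mass flow x exhaust velocity; mass flow ~ area x density x feed."""
    area = math.pi * (ROD_DIAMETER_M / 2.0) ** 2
    mass_flow = area * ROD_DENSITY_KG_M3 * feed_speed_m_s
    return mass_flow * EXHAUST_VELOCITY_M_S

for feed in (0.005, 0.010, 0.020):  # metres of rod consumed per second
    print("feed %.0f mm/s -> thrust %.0f N" % (feed * 1000, thrust_newtons(feed)))
```

Doubling the feed rate doubles the mass flow, and with it the thrust – which is why a motor that is otherwise solid-fueled becomes throttleable.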

Another challenge of the modern space age is how to deliver additional payloads and satellites into orbit without creating more orbital clutter. By introducing an engine that makes for cheap launches and leaves no disposable parts behind, the autophage could be a game-changing technology, right up there with fully-recoverable rockets.

The research team also consisted of Mykola Dron and Anatoly Pashkov – a Professor and Senior Researcher from Oles Honchar Dnipro National University – and Kevin Worrall and Michael Middleton – a Research Associate and M.S. student from the University of Glasgow.

Further Reading: University of Glasgow, Journal of Spacecraft and Rockets

 

Uh oh, the EMDrive Could be Getting Its “Thrust” From Cables and Earth’s Magnetic Field

Ever since NASA announced that they had created a prototype of the controversial Radio Frequency Resonant Cavity Thruster (aka. the EM Drive), any and all reported results have been the subject of controversy. Initially, reported tests were the stuff of rumors and leaks, and the results were treated with understandable skepticism. Even after the paper submitted by the Eagleworks team passed peer review, there have still been unanswered questions.

Hoping to address this, a team of physicists from TU Dresden – known as the SpaceDrive Project – recently conducted an independent test of the EM Drive. Their findings were presented at the 2018 Aeronautics and Astronautics Association of France’s Space Propulsion conference, and were less than encouraging. What they found, in a nutshell, was that much of the EM Drive’s thrust could be attributable to outside factors.

The results of their test were reported in a study titled “The SpaceDrive Project – First Results on EMDrive and Mach-Effect Thrusters”, which recently appeared online. The study was led by Martin Tajmar, an engineer from the Institute of Aerospace Engineering at TU Dresden, and included TU Dresden scientists Matthias Kößling, Marcel Weikert and Maxime Monette.

EMDrive Thruster: Cavity (Left), Antenna (Middle) and On Balance (Right). Credit: Martin Tajmar, et al.

To recap, the EM Drive is a concept for an experimental space engine that came to the attention of the space community years ago. It consists of a hollow cone made of copper or other materials that reflects microwaves between opposite walls of the cavity in order to generate thrust. Unfortunately, this drive system is based on principles that violate the Conservation of Momentum law.

This law states that within a system, the amount of momentum remains constant; it is neither created nor destroyed, but only changes through the action of forces. Since the EM Drive involves an electromagnetic microwave cavity converting electrical energy directly into thrust, it has no reaction mass. It is therefore “impossible”, as far as conventional physics goes.
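In equation form, the objection is straightforward. A rocket’s thrust comes from expelling reaction mass:

$$F = \dot{m}\,v_e$$

where $\dot{m}$ is the rate at which mass is expelled and $v_e$ is its exhaust velocity. A closed cavity expels nothing, so $\dot{m} = 0$ and, classically, $F = 0$. Even treating the device as a photon rocket that leaks radiation would give at most $F = P/c$ – roughly 3.3 micronewtons per kilowatt of input power, far below the thrust levels claimed for the EM Drive.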

As a result, many scientists have been skeptical about the EM Drive and wanted to see definitive evidence that it works. In response, a team of scientists at NASA’s Eagleworks Laboratories began conducting a test of the propulsion system. The team was led by Harold White, the Advanced Propulsion Team Lead for the NASA Engineering Directorate and the Principal Investigator for NASA’s Eagleworks lab.

Despite a report that was leaked in November of 2016 – titled “Measurement of Impulsive Thrust from a Closed Radio Frequency Cavity in Vacuum” – the team never presented any official findings. This prompted the team led by Martin Tajmar to conduct their own test, using an engine that was built based on the same specifications as those used by the Eagleworks team.

According to tests conducted by a team from TU Dresden, the EM Drive’s thrust may be the result of interaction with Earth’s magnetic field. Credit: ESA/ATG medialab

In short, the TU Dresden team’s prototype consisted of a cone-shaped hollow engine set inside a highly shielded vacuum chamber, which they then fired microwaves at. While they found that the EM Drive did experience thrust, the detectable thrust may not have been coming from the engine itself. Essentially, the thruster exhibited the same amount of force regardless of which direction it was pointing.

This suggested that the thrust was originating from another source, which they believe could be the result of interaction between engine cables and the Earth’s magnetic field. As they conclude in their report:

“First measurement campaigns were carried out with both thruster models reaching thrust/thrust-to-power levels comparable to claimed values. However, we found that e.g. magnetic interaction from twisted-pair cables and amplifiers with the Earth’s magnetic field can be a significant error source for EMDrives. We continue to improve our measurement setup and thruster developments in order to finally assess if any of these concepts is viable and if it can be scaled up.”

In other words, the mystery thrust reported by previous experiments may have been nothing more than an error. If true, it would explain how the “impossible EM Drive” was able to achieve small amounts of measurable thrust when the laws of physics claim it shouldn’t be. However, the team also emphasized that more testing will be needed before the EM Drive can be dismissed or validated with confidence.

What will it take before human beings can travel to the nearest star system within their own lifetimes? Credit: Shigemi Numazawa/ Project Daedalus

Alas, it seems that the promise of being able to travel to the Moon in just four hours, to Mars in 70 days, and to Pluto in 18 months – all without the need for propellant – may have to wait. But rest assured, many other experimental technologies are being tested that could one day allow us to travel within our Solar System (and beyond) in record time. And additional tests will be needed before the EM Drive can be written off as just another pipe dream.

The team also conducted their own test of the Mach-Effect Thruster, another concept that is considered to be unlikely by many scientists. The team reported more favorable results with this concept, though they indicated that more research is needed here as well before anything can be conclusively said. You can learn more about the team’s test results for both engines by reading their report here.

And be sure to check out this video by Scott Manley, who explains the latest test and its results:

Further Reading: ResearchGate, Phys.org

Pros and Cons of Various Methods of Interstellar Travel

It’s a staple of science fiction, and something many people have fantasized about at one time or another: the idea of sending out spaceships with colonists and transplanting the seed of humanity among the stars. Between discovering new worlds, becoming an interstellar species, and maybe even finding extra-terrestrial civilizations, the dream of spreading beyond the Solar System is one that can’t become reality soon enough!

For decades, scientists have contemplated how humanity might one day achieve this lofty goal. And the range of concepts they have come up with presents a whole lot of pros and cons. These pros and cons were raised in a recent study by Martin Braddock, a member of the Mansfield and Sutton Astronomical Society, a Fellow of the Royal Society of Biology, and a Fellow of the Royal Astronomical Society.

The study, titled “Concepts for Deep Space Travel: From Warp Drives and Hibernation to World Ships and Cryogenics”, recently appeared in the scientific journal Current Trends in Biomedical Engineering and Biosciences (a Juniper Journals publication). As Braddock indicates in his study, the question of how human beings could explore neighboring star systems has become more relevant in recent years thanks to exoplanet discoveries.

A list of some of the recently-discovered potentially habitable exoplanets. Credit: hpcf.upr.edu

As we reviewed in a previous article, “How Long Would it Take to Travel to the Nearest Star?”, there are numerous proposed and theoretical ways to travel between our Solar System and other stars in the galaxy. However, beyond the technology involved and the time it would take, there are also the biological and psychological implications for human crews that would need to be taken into account beforehand.

And thanks to the way public interest in space exploration has been renewed in recent years, cost-benefit analyses of all the possible methods are becoming increasingly necessary. As Dr. Braddock told Universe Today via email:

“Interstellar travel has become more relevant because of the concerted effort to find ways across all of the space agencies to maintain human health in ‘short’ (2-3 yr) space travel. With Mars missions reasonably in sight, Stephen Hawking’s death highlighting one of his many beliefs that we should colonize deep space and Elon Musk’s determination to minimize waste on space travel, together with reborn visions of ‘bolt-on’ accessories to the ISS (the Bigelow expandable module) conjures some imaginative concepts.”

All told, Dr. Braddock considers five principal means of mounting crewed missions to other star systems in his study. These include super-luminal (aka. FTL) travel, hibernation or stasis regimes, negligible senescence (aka. anti-aging) engineering, world ships capable of supporting multiple generations of travellers (aka. generation ships), and cryogenic freezing technologies.

Artist’s concept of a spacecraft using an Alcubierre Warp Drive. Credit: NASA

For FTL travel, the advantages are obvious, and while it remains entirely theoretical at this point, there are concepts being investigated today. A notable FTL concept – known as the Alcubierre Warp Drive – is currently being researched by multiple organizations, which includes the Tau Zero Foundation and the Advanced Propulsion Physics Laboratory: Eagleworks (APPL:E) at NASA’s Johnson Space Center.

To break it down succinctly, this method of space travel involves stretching the fabric of space-time in a wave which would (in theory) cause the space ahead of a ship to contract and the space behind it to expand. The ship would then ride this region, known as a “warp bubble”, through space. Since the ship is not moving within the bubble, but is being carried along as the region itself moves, conventional relativistic effects such as time dilation would not apply.
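For the mathematically inclined, Alcubierre’s proposal is usually written as the metric

$$ds^2 = -c^2\,dt^2 + \left(dx - v_s f(r_s)\,dt\right)^2 + dy^2 + dz^2$$

where $v_s$ is the velocity of the bubble, $r_s$ is the distance from the bubble’s center, and $f$ is a shaping function equal to 1 inside the bubble and falling to 0 far away. Since the ship sits where $f = 1$, the $(dx - v_s\,dt)$ term carries it along at $v_s$ – however large – while it never moves faster than light relative to its local spacetime.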

As Dr. Braddock indicates, the advantages of such a propulsion system include being able to achieve “apparent” FTL travel without violating the laws of Relativity. In addition, a ship traveling in a warp bubble would not have to worry about colliding with space debris, and there would be no upper limit to the maximum speed attainable. Unfortunately, the downsides of this method of travel are equally obvious.

These include the fact that there are currently no known methods for creating a warp bubble in a region of space that does not already contain one. In addition, extremely high energies would be required to create this effect, and there is no known way for a ship to exit a warp bubble once it has entered. In short, FTL is a purely theoretical concept for the time being, and there are no indications that it will move from theory to practice in the near future.

“The first [strategy] is FTL travel, but the other strategies accept that FTL travel is very theoretical and that one option is to extend human life or to engage in multiple-generational voyages,” said Dr. Braddock. “The latter could be achieved in the future, given the willingness to design a large enough craft and the propulsion technology development to achieve 0.1 x c.”

In other words, the most plausible concepts for interstellar space travel are not likely to achieve speeds of more than ten percent of the speed of light – about 29,979,246 m/s (~107,925,285 km/h; ~67,061,663 mph). This is still a very tall order, considering that the fastest mission to date was the Helios 2 probe, which achieved a maximum velocity of over 66,000 m/s (240,000 km/h; 150,000 mph). Still, this provides a more realistic framework to work within.
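To put one-tenth the speed of light in perspective, the one-way travel time to Proxima Centauri – about 4.24 light-years away – works out to

$$t = \frac{d}{v} = \frac{4.24\ \text{ly}}{0.1\,c} \approx 42\ \text{years}$$

whereas at Helios 2’s record speed (roughly 0.00022 c) the same trip would take on the order of 19,000 years.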

Where hibernation and stasis regimes are concerned, the advantages (and disadvantages) are more immediate. For starters, the technology is realizable and has been extensively studied on shorter timescales in both humans and animals. In the latter case, natural hibernation cycles provide the most compelling evidence that hibernation can last for months without incident.

The downsides, however, come down to all the unknowns. For example, there are the likely risks of tissue atrophy resulting from extended periods of time spent in a microgravity environment. This could be mitigated by artificial gravity or other means (such as electrostimulation of muscles), but considerable clinical research is needed before this could be attempted. This raises a whole slew of ethical issues, since such tests would pose their own risks.

Strategies for Engineered Negligible Senescence (SENS) are another avenue, offering the potential for human beings to counter the effects of long-duration spaceflight by reversing the aging process. In addition to ensuring that the same generation that boarded the ship would be the one to make it to its destination, this technique also has the potential to drive stem cell therapy research here on Earth.

However, in the context of long-duration spaceflight, multiple treatments (or continuous ones throughout the travel process) would likely be necessary to achieve full rejuvenation. A considerable amount of research would also be needed beforehand in order to test the process and address the individual components of aging, once again leading to a number of ethical issues.

Then there are world ships (aka. generation ships): self-contained and self-sustaining spacecraft large enough to accommodate several generations of space travelers. These ships would rely on conventional propulsion and therefore take centuries (or millennia) to reach another star system. The immediate advantage of this concept is that it would fulfill two major goals of space exploration at once: maintaining a human colony in space and traveling to a potentially-habitable exoplanet.

In addition, a generation ship would rely on propulsion concepts that are currently feasible, and a crew of thousands would multiply the chances of successfully colonizing another planet. Of course, the cost of constructing and maintaining such large spaceships would be prohibitive. There are also the moral and ethical challenges of sending human crews into deep space for such extended periods of time.

For instance, is there any guarantee that the crew wouldn’t all go insane and kill each other? And last, there is the fact that newer, more advanced ships would be developed on Earth in the meantime. This means that a faster ship, which would depart Earth later, would be able to overtake a generation ship before it reached another star system. Why spend so much on a ship when it’s likely to become obsolete before it even makes it to its destination?

A concept for a multi-generation ship being designed by the TU Delft Starship Team (DSTART), with support from the ESA. Credit and Copyright: Nils Faber & Angelo Vermeulen

Last, there is cryogenics, a concept that has been explored extensively in the past few decades as a possible means for life-extension and space travel. In many ways, this concept is an extension of hibernation technology, but benefits from a number of recent advancements. The immediate advantage of this method is that it accounts for all the current limitations imposed by technology and a relativistic Universe.

Basically, it doesn’t matter whether FTL (or speeds beyond 0.1 c) is possible or how long a voyage will take, since the crew will be asleep and perfectly preserved for the duration. On top of that, we already know the basic technology works, as demonstrated by recent advancements in which organ tissues and even whole organisms were vitrified and then successfully rewarmed.

However, the risks are also greater than with hibernation. For instance, the long-term effects of cryogenic freezing on the physiology and central nervous system of higher-order animals and humans are not yet known. This means that extensive testing and human trials would be needed before it was ever attempted, which once again raises a number of ethical challenges.

In the end, there are a lot of unknowns associated with any and all potential methods of interstellar travel. Similarly, much more research and development is necessary before we can safely say which of them is the most feasible. In the meantime, Dr. Braddock admits that it’s much more likely that any interstellar voyages will involve robotic explorers using telepresence technology to show us other worlds – though these don’t possess the same allure.

Project Starshot, an initiative sponsored by the Breakthrough Foundation, is intended to be humanity’s first interstellar voyage. Credit: breakthroughinitiatives.org

“Almost certainly, and this revisits the early concept of von Neumann replication probes (minus the replication!),” he said. “Cube Sats or the like may well achieve this goal but will likely not engage the public imagination nearly as much as human space travel. I believe Sir Martin Rees has suggested the concept of a semi-human AI type device… also some way off.”

Currently, there is only one proposed mission for sending an interstellar spacecraft to a nearby star system. This would be Breakthrough Starshot, a proposal to send a laser sail-driven nanocraft to Alpha Centauri in just 20 years. After being accelerated to 20% the speed of light – roughly 60,000 km/s (~216 million km/h; ~134 million mph) – this craft would conduct a flyby of Alpha Centauri and also be able to beam home images of Proxima b.

Beyond that, all the missions that involve venturing to the outer Solar System consist of robotic orbiters and probes and all proposed crewed missions are directed at sending astronauts back to the Moon and on to Mars. Still, humanity is just getting started with space exploration and we certainly need to finish exploring our own Solar System before we can contemplate exploring beyond it.

In the end, a lot of time and patience will be needed before we can start to venture beyond the Kuiper Belt and Oort Cloud to see what’s out there.

Further Reading: ResearchGate

NASA Has Tested a New Fission Space Reactor that Could be Used in Future Missions

Looking to the future of crewed space exploration, it is clear to NASA and other space agencies that certain technological requirements need to be met. Not only are a new generation of launch vehicles and space capsules needed (like the SLS and Orion spacecraft), but new forms of energy production are needed to ensure that long-duration missions to the Moon, Mars, and other locations in the Solar System can take place.

One possibility that addresses these concerns is Kilopower, a lightweight fission power system that could power robotic missions, bases and exploration missions. In collaboration with the Department of Energy’s National Nuclear Security Administration (NNSA), NASA recently conducted a successful demonstration of a new nuclear reactor power system that could enable long-duration crewed missions to the Moon, Mars, and beyond.

Known as the Kilopower Reactor Using Stirling Technology (KRUSTY) experiment, the technology was unveiled at a news conference on Wednesday, May 2nd, at NASA’s Glenn Research Center. According to NASA, this power system is capable of generating up to 10 kilowatts of electrical power – enough to power several households continuously for ten years, or an outpost on the Moon or Mars.
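The “several households” figure is simple division. Assuming a typical home draws an average of roughly 1.2 kilowatts on a continuous basis (an illustrative figure; actual usage varies widely):

$$\frac{10\ \text{kW}}{1.2\ \text{kW per home}} \approx 8\ \text{homes}$$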

NASA and NNSA engineers lower the wall of the vacuum chamber around the Kilopower Reactor Using Stirling Technology (KRUSTY) system. Credits: Los Alamos National Laboratory

As Jim Reuter, NASA’s acting associate administrator for the Space Technology Mission Directorate (STMD), explained in a recent NASA press release:

“Safe, efficient and plentiful energy will be the key to future robotic and human exploration. I expect the Kilopower project to be an essential part of lunar and Mars power architectures as they evolve.”

The prototype power system employs a small solid uranium-235 reactor core and passive sodium heat pipes to transfer reactor heat to high-efficiency Stirling engines, which convert the heat to electricity. This power system is ideally suited to locations like the Moon, where power generation using solar arrays is difficult because lunar nights are equivalent to 14 days on Earth.

In addition, many plans for lunar exploration involve building outposts in the permanently-shaded polar regions or in stable underground lava tubes. On Mars, sunshine is more plentiful, but subject to the planet’s diurnal cycle and weather (such as dust storms). This technology could therefore ensure a steady supply of power that is not dependent on intermittent sources like sunlight. As Marc Gibson, the lead Kilopower engineer at Glenn, said:

“Kilopower gives us the ability to do much higher power missions, and to explore the shadowed craters of the Moon. When we start sending astronauts for long stays on the Moon and to other planets, that’s going to require a new class of power that we’ve never needed before.”

Artist’s impression of four KRUSTY generators providing power to an outpost on the surface of Mars. Credit: NASA/STMD

The Kilopower experiment was conducted at the NNSA’s Nevada National Security Site (NNSS) between November 2017 and March 2018. In addition to demonstrating that the system could produce electricity through fission, the purpose of the experiment was also to show that it is stable and safe in any environment. For this reason, the Kilopower team conducted the experiment in four phases.

The first two phases, which were conducted without power, confirmed that each component in the system functioned properly. For the third phase, the team increased power to heat the core slowly before moving on to phase four, which consisted of a 28-hour, full-power test run. This phase simulated all stages of a mission, which included a reactor startup, ramp up to full power, steady operation and shutdown.

Throughout the experiment, the team simulated various system failures to ensure that the system would keep working – including power reductions, failed engines, and failed heat pipes. Throughout, the KRUSTY generator kept providing electricity, proving that it can endure whatever space exploration throws at it. As Gibson indicated:

“We put the system through its paces. We understand the reactor very well, and this test proved that the system works the way we designed it to work. No matter what environment we expose it to, the reactor performs very well.”

A Kilopower reactor could allow for permanent bases on the Moon and Mars and allow for the local production of fuel and other materials. Credit: ESA/Foster + Partners

Looking ahead, the Kilopower project will remain a part of NASA’s Game Changing Development (GCD) program. As part of NASA’s Space Technology Mission Directorate (STMD), this program’s goal is to advance space technologies that may lead to entirely new approaches for the Agency’s future space missions. Eventually, the team hopes to make the transition to the Technology Demonstration Mission (TDM) program by 2020.

If all goes well, the KRUSTY reactor could allow for permanent human outposts on the Moon and Mars. It could also support missions that rely on In-Situ Resource Utilization (ISRU) to produce propellant from local sources of water ice, and building materials from local regolith.

Basically, when robotic missions are mounted to the Moon to 3D-print bases out of local regolith, and astronauts begin making regular trips to the Moon to conduct research and experiments (like they do today aboard the International Space Station), it could be KRUSTY reactors that provide them with all their power needs. In a few decades, the same could be true for Mars and even locations in the outer Solar System.

This reactor system could also pave the way for rockets that rely on nuclear-thermal or nuclear-electric propulsion, enabling missions beyond Earth that are both faster and more cost-effective!

And be sure to enjoy this video of the GCD program, courtesy of NASA 360:

Further Reading: NASA

NASA is Investigating a Self-Assembling Space Telescope

NASA has some pretty advanced concepts in mind when it comes to the next generation of space telescopes. These include the Transiting Exoplanet Survey Satellite (TESS), which recently took to space, as well as the James Webb Space Telescope (JWST) (scheduled to launch in 2020) and the Wide-Field Infrared Survey Telescope (WFIRST), which is still in development.

Beyond these, NASA has also identified several promising proposals as part of its 2020 Decadal Survey for Astrophysics. But perhaps the most ambitious concept is one that calls for a space telescope made up of modules that would assemble themselves. This concept was recently selected for Phase I development as part of the 2018 NASA Innovative Advanced Concepts (NIAC) program.

The team behind this concept is led by Dmitri Savransky, an assistant professor of mechanical and aerospace engineering at Cornell University. Along with 15 colleagues from across the US, Savransky has produced a concept for a ~30 meter (100 foot) modular space telescope with adaptive optics. But the real kicker is the fact that it would be made up of a swarm of modules that would assemble themselves autonomously.

On March 30th, 16 concepts received a Phase I award as part of the NASA Innovative Advanced Concepts (NIAC) program. Credit: NASA

Prof. Savransky is well-versed in space telescopes and exoplanet hunting, having assisted in the integration and testing of the Gemini Planet Imager – an instrument on the Gemini South Telescope in Chile. He also participated in the planning of the Gemini Planet Imager Exoplanet Survey, which discovered a Jupiter-like planet orbiting 51 Eridani (51 Eridani b) in 2015.

But looking to the future, Prof. Savransky believes that self-assembly is the way to go to create a super telescope. As he and his team described the telescope in their proposal:

“The entire structure of the telescope, including the primary and secondary mirrors, secondary support structure and planar sunshield will be constructed from a single, mass-produced spacecraft module. Each module will be composed of a hexagonal ~1 m diameter spacecraft topped with an edge-to-edge, active mirror assembly.”

These modules would be launched independently and then navigate to the Sun-Earth L2 point using deployable solar sails. These sails will then become the planar telescope sunshield once the modules come together and assemble themselves, without the need for human or robotic assistance. While this may sound radically advanced, it is certainly in keeping with what the NIAC looks for.
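A rough count shows why mass production is central to the idea. If each hexagonal module spans about 1 meter flat-to-flat, its mirror area is roughly 0.87 square meters, so tiling a filled ~30 meter aperture would require on the order of

$$N \approx \frac{\pi\,(15\ \text{m})^2}{0.87\ \text{m}^2} \approx 800\ \text{modules}$$

This back-of-the-envelope estimate ignores gaps and any central obstruction, but it makes the point: the modules must be cheap, identical, and produced by the hundreds.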

“That’s what the NIAC program is,” said Dr. Savransky in a recent interview with the Cornell Chronicle. “You pitch these somewhat crazy-sounding ideas, but then try to back them up with a few initial calculations, and then it’s a nine-month project where you’re trying to answer feasibility questions.”

Artist’s concept of the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR) space telescope. Credits: NASA/GSFC

As part of the 2018 NIAC Phase I awards, which were announced on March 30th, the team was awarded $125,000 over a nine-month period to conduct these studies. If these are successful, the team will be able to apply for a Phase II award. As Mason Peck, an associate professor of mechanical and aerospace engineering at Cornell and the former chief technology officer at NASA, indicated, Savransky is on the right track with his NIAC proposal:

“As autonomous spacecraft become more common, and as we continue to improve how we build very small spacecraft, it makes a lot of sense to ask Savransky’s question: Is it possible to build a space telescope that can see farther, and better, using only inexpensive small components that self-assemble in orbit?”

The target mission for this concept is the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR), a proposal that is currently being explored as part of NASA’s 2020 Decadal Survey. As one of two concepts being investigated by NASA’s Goddard Space Flight Center, this mission concept calls for a space telescope with a massive segmented primary mirror that measures about 15 meters (49 feet) in diameter.

Much like the JWST, LUVOIR’s mirror would be made up of adjustable segments that would unfold once deployed to space. Actuators and motors would actively adjust and align these segments in order to achieve the perfect focus and capture light from faint and distant objects. The primary aim of this mission would be to discover new exoplanets, as well as analyze light from those that have already been discovered to assess their atmospheres.

The Hubble Space Telescope on the left has a 2.4 meter mirror and the James Webb Space Telescope has a 6.5 meter mirror. LUVOIR, not shown, will dwarf them both with a massive 15 meter mirror. Image: NASA

As Savransky and his colleagues indicated in their proposal, their concept is directly in line with the priorities of the NASA Technology Roadmaps in Science Instruments, Observatories, and Sensor Systems, and in Robotics and Autonomous Systems. They also state that the architecture is a credible means of constructing a giant space telescope, something that would not be possible using the approach taken with previous generations of telescopes like Hubble and the JWST.

“James Webb is going to be the largest astrophysical observatory we’ve ever put in space, and it’s incredibly difficult,” he said. “So going up in scale, to 10 meters or 12 meters or potentially even 30 meters, it seems almost impossible to conceive how you would build those telescopes the same way we’ve been building them.”

Having been granted a Phase I award, the team is planning to conduct detailed simulations of how the modules would fly through space and rendezvous with each other to determine how large the solar sails need to be. They also plan to conduct an analysis of the mirror assembly to validate that the modules could achieve the required surface figure once assembled.

As Peck indicated, if successful, Dr. Savransky’s proposal could be a game changer:

“If Professor Savransky proves the feasibility of creating a large space telescope from tiny pieces, he’ll change how we explore space. We’ll be able to afford to see farther, and better than ever – maybe even to the surface of an extrasolar planet.”

On June 5th and 6th, NASA will also be conducting an NIAC Orientation Meeting in Washington, D.C., where all the Phase I winners will have a chance to meet and discuss their ideas. Other proposals that received a Phase I award include shape-shifting robots for exploring Titan, lightweight aerial sensors to explore Venus’ atmosphere, flapping-wing swarm robots to explore Mars, a new form of beam propulsion for interstellar missions (similar to Breakthrough Starshot), a steam-powered robot for ocean worlds, and a self-replicating habitat made from fungus.

You can read more about these concepts, as well as those that were given a Phase II award, here.

Further Reading: Cornell Chronicle, NASA

Air-Breathing Electric Thruster Could Keep Satellites in Low Earth Orbit for Years

When it comes to the future of space exploration, one of the greatest challenges is coming up with engines that can maximize performance while also ensuring fuel efficiency. This will not only reduce the cost of individual missions, it will ensure that robotic spacecraft (and even crewed spacecraft) can operate for extended periods of time in space without having to refuel.

In recent years, this challenge has led to some truly innovative concepts, one of which was recently built and tested for the very first time by an ESA team. This engine concept consists of an electric thruster that is capable of “scooping up” scarce air molecules from the top of an atmosphere and using them as propellant. This development will open the way for all kinds of satellites that can operate in very low orbits around planets for years at a time.

The concept of an air-breathing thruster (aka. Ram-Electric Propulsion) is relatively simple. In short, the engine works on the same principles as a ramscoop (where interstellar hydrogen is collected to provide fuel) and an ion engine – where collected particles are charged and ejected. Such an engine would do away with onboard propellant by taking in atmospheric molecules as it passed through the top of a planet’s atmosphere.

The test set-up for the air-breathing electric propulsion thruster recently developed by Sitael and QuinteScience in conjunction with the ESA. Credit: ESA/Sitael

The concept was the subject of a study titled “RAM Electric Propulsion for Low Earth Orbit Operation: An ESA Study”, which was presented at the 30th International Electric Propulsion Conference in 2007. The study emphasized how “Low Earth orbit satellites are subject to atmospheric drag and thus their lifetimes are limited with current propulsion technologies by the amount of propellant they can carry to compensate for it.”

The study’s authors also indicated that satellites using high specific impulse electric propulsion would be capable of compensating for drag during low-altitude operation for an extended period of time. But as they conclude, such a mission would still be limited by the amount of propellant it could carry. This was certainly the case for the ESA’s Gravity field and steady-state Ocean Circulation Explorer (GOCE), a gravity-mapping satellite.

While GOCE remained in orbit of Earth for more than four years and operated at altitudes as low as 250 km (155 mi), its mission ended the moment it exhausted its 40 kg (88 lbs) supply of xenon propellant. As such, the concept of an electric propulsion system that can utilize atmospheric molecules as propellant has also been investigated. As Dr. Louis Walpot of the ESA explained in an ESA press release:

“This project began with a novel design to scoop up air molecules as propellant from the top of Earth’s atmosphere at around 200 km altitude with a typical speed of 7.8 km/s.”
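Those two numbers – 200 km and 7.8 km/s – set the scale of the drag such a thruster must cancel. Here is a rough estimate, with the atmospheric density, drag coefficient, and spacecraft cross-section all assumed for illustration (they are not values from the ESA study):

```python
# Rough drag estimate for a small satellite at ~200 km altitude.
RHO_200KM = 2.5e-10   # kg/m^3, assumed thermosphere density (varies with solar activity)
VELOCITY = 7800.0     # m/s, orbital speed quoted for ~200 km
DRAG_COEFF = 2.2      # typical free-molecular-flow drag coefficient
FRONTAL_AREA = 1.0    # m^2, assumed spacecraft cross-section

drag_newtons = 0.5 * RHO_200KM * VELOCITY**2 * DRAG_COEFF * FRONTAL_AREA
print("Drag to compensate: %.0f mN" % (drag_newtons * 1000))  # ~17 mN
```

A few tens of millinewtons is squarely in electric-propulsion territory, which is why GOCE could hold such a low orbit with an ion engine – until its xenon ran out.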

Diagram illustrating how air-breathing electric propulsion works. Credit: ESA–A. Di Giacomo

To develop this concept, the Italian aerospace company Sitael and the Polish aerospace company QuinteScience teamed up to create a novel intake and thruster design. Whereas QuinteScience built an intake that would collect and compress incoming atmospheric particles, Sitael developed a dual-stage thruster that would charge and accelerate these particles to generate thrust.

The team then ran computer simulations to see how particles would behave across a range of intake options. But in the end, they chose to conduct a practical test to see whether the combined intake and thruster would work together. To do this, they tested the assembly in a vacuum chamber at one of Sitael’s test facilities. The chamber simulated an environment at 200 km altitude, while a “particle flow generator” provided the oncoming high-speed molecules.

To provide a more complete test and make sure the thruster would function in a low-pressure environment, the team began by igniting it with xenon propellant. As Dr. Walpot explained:

“Instead of simply measuring the resulting density at the collector to check the intake design, we decided to attach an electric thruster. In this way, we proved that we could indeed collect and compress the air molecules to a level where thruster ignition could take place, and measure the actual thrust. At first we checked our thruster could be ignited repeatedly with xenon gathered from the particle beam generator.”

Fired at first using standard xenon propellant, the test thruster was then shifted to atmospheric air, proving the principle of air-breathing electric propulsion. Credit: ESA

As a next step, the team partially replaced the xenon with a nitrogen-oxygen air mixture to simulate Earth’s upper atmosphere. As hoped, the engine kept firing, and the only thing that changed was the color of the thrust.

“When the xenon-based blue color of the engine plume changed to purple, we knew we’d succeeded,” said Dr. Walpot. “The system was finally ignited repeatedly solely with atmospheric propellant to prove the concept’s feasibility. This result means air-breathing electric propulsion is no longer simply a theory but a tangible, working concept, ready to be developed, to serve one day as the basis of a new class of missions.”

The development of air-breathing electric thrusters could allow for an entirely new class of satellite that could operate within the fringes of the atmospheres of Mars, Titan, and other bodies for years at a time. With this kind of operational lifespan, these satellites could gather volumes of data on these bodies’ meteorological conditions, seasonal changes, and the history of their climates.

Such satellites would also be very useful when it comes to observing Earth. Since they would be able to operate at lower altitudes than previous missions, and would not be limited by the amount of propellant they could carry, satellites equipped with air-breathing thrusters could operate for extended periods of time. As a result, they could offer more in-depth analyses on Climate Change, and monitor meteorological patterns, geological changes, and natural disasters more closely.

Further Reading: ESA

Space Catapult Startup SpinLaunch has Come Out of Stealth Mode. Space catapults? Yes Please

Of all the challenges presented by space exploration – and to be fair, there are many! – one of the greatest is the cost. When it comes right down to it, launching disposable rockets from Earth and getting them to the point where they can achieve escape velocity and reach space is expensive. In addition, these rockets need to be big and powerful, and able to hold a lot of fuel, in order to lift spacecraft or cargo.

It is for this reason that so many efforts in the past few decades have been focused on reducing the costs of individual launches. Between reusable rockets and reusable spacecraft (i.e. the Space Shuttle), there are plenty of ways to make launch vehicles cheaper. But to Jonathan Yaney, the founder of SpinLaunch, a real cost-cutting solution is to propel smaller payloads into orbit using a space catapult instead.

The concept of a space catapult is simple, and has been explored at length since the beginning of the Space Age. Also known as a mass driver or coilgun, the concept relies on a set of powerful electromagnetic rails to accelerate spacecraft or payloads to escape velocity and launch them horizontally. Since the 1960s, NASA has been exploring the concept as an alternative to conducting rocket launches.

The Magnetic Levitation (MagLev) System evaluated at NASA’s Marshall Space Flight Center. Credit: NASA

In addition, NASA has continued to develop this technology through the Marshall Space Flight Center and the Kennedy Space Center. Here, engineers have been working on ways to launch spacecraft horizontally using scramjets on an electrified track or a gas-powered sled. A good example of this is the Magnetic Levitation (MagLev) System, which uses the same technology as a maglev train to accelerate a small space plane into orbit.

Another variation of the concept involves a centrifuge, where the spacecraft or cargo is accelerated on a circular track until it reaches escape velocity (and then launches). This concept was proposed in the 1990s by Dr. Derek Tidman – a physicist who specialized in electrothermal and electromagnetic acceleration. Known as the Slingatron, this version of the space catapult is currently being researched by HyperV Technologies.

However, these ideas were never adopted because vast improvements were needed in terms of electromagnetic induction technology in order to achieve the speeds necessary to put heavy payloads into space. But thanks to improvements in high-speed maglev trains, recent attempts to create Hyperloop pods and tracks, and the growth of the commercial aerospace market, the time may be ripe to revisit this concept.

Such is the hope of Jonathan Yaney, an aerospace enthusiast who has a long history of co-founding startups. As he describes himself, Yaney is a “serial entrepreneur” who has spent the past 15 years founding companies in the fields of consulting, IT, construction, and aerospace. Now, he has founded SpinLaunch with the intention of launching satellites into space.

SpinLaunch’s company logo. Credit: SpinLaunch

And while Yaney has been known for being rather reclusive, TechCrunch recently secured an exclusive interview and gained access to the company hangar. According to multiple sources they cite, Yaney and the company he founded are launching a crowdfunding campaign to raise the $30 million in Series A funding needed to develop the catapult technology. In the course of the interview, Yaney expressed his vision for space exploration as follows:

“Since the dawn of space exploration, rockets have been the only way to access space. Yet in 70 years, the technology has only made small incremental advances. To truly commercialize and industrialize space, we need 10x tech improvement.”

According to a source cited by TechCrunch, SpinLaunch’s own design would apparently involve a centrifuge that accelerates payloads to speeds of up to 4,828 km/h (3,000 mph). Additionally, the cargo could be equipped with supplemental rockets in order to escape Earth’s atmosphere. By replacing rocket boosters with a kinetic launch system, SpinLaunch’s concept would rely on principles that are similar to those explored by NASA.
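One way to grasp the engineering challenge is to compute the centripetal acceleration at that tip speed. With an assumed (purely illustrative) centrifuge radius of 50 meters, a payload moving at 4,828 km/h (about 1,341 m/s) would experience

$$a = \frac{v^2}{r} = \frac{(1341\ \text{m/s})^2}{50\ \text{m}} \approx 36{,}000\ \text{m/s}^2 \approx 3{,}700\,g$$

meaning the payload and any supplemental rocket would have to be hardened to survive thousands of times Earth’s gravity before release.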

But as he went on to explain, the method his company is exploring is different. “SpinLaunch employs a rotational acceleration method, harnessing angular momentum to gradually accelerate the vehicle to hypersonic speeds,” he said. “This approach employs a dramatically lower cost architecture with much lower power.” Utilizing this technology, Yaney estimates that the costs of individual launches could be reduced to $500,000 – essentially, by a factor of 10 to 200.

A lunar base, as imagined by NASA in the 1970s. Credit: NASA

Not much else is known about this startup. According to Bloomberg Financial, little is known about the company or its founder beyond a brief description. However, according to SEC documents cited by TechCrunch, Yaney was able to raise $1 million in equity in 2014 and $2.9 million in 2015, before being $2.2 million in debt by mid-2017 and another $2 million in debt by late 2017.

Luckily, the Hawaii state senate introduced a bill last month that proposed issuing $25 million in bonds to assist SpinLaunch with the construction of its space catapult. Hawaii also hopes to gain construction contracts for the launch system, as part of its commitment to making space accessible. As it states in the bill:

“[T]he department of budget and finance, with the approval of the governor, is authorized to issue special purpose revenue bonds in a total amount not to exceed $25,000,000, in one or more series, for the purpose of assisting SpinLaunch Inc., a Delaware corporation, in financing the costs relating to the planning, design, construction, equipping, acquisition of land, including easements or other interests therein, and other tangible assets for an electrically powered, kinetic launch system to transport small satellites into low Earth orbit.”

In the meantime, Yaney is looking to the public and to several big venture capital firms to raise the revenue he needs to make his vision become a reality. Of course, beyond the issue of financing, there are several technical barriers which still need to be addressed before a space catapult could be realized. The most obvious of these is how to overcome the air resistance produced by Earth’s dense atmosphere.

However, Yaney was optimistic in his interview with TechCrunch, claiming that his company is investigating these and other challenges:

“During the last three years, the core technology has been developed, prototyped, tested and most of the tech risk retired. The remaining challenges are in the construction and associated areas that all very large hardware development and construction projects face.”

There’s no indication of when such a system might be complete, but that’s to be expected at this point. However, with the support of the Hawaiian government and some additional capital, his company is likely to secure its Series A funding and begin moving to the next phase of development. Much like the Hyperloop, this concept may prove to be one of those ideas that keeps advancing because of the people who are willing to make it happen!

And be sure to check out this video about SpinLaunch’s crowdfunding campaign, courtesy of Scott Manley:

Further Reading: TechCrunch

Here’s How SpaceX is Planning to Recover Rocket Fairings: a Boat With a Net Called Mr. Steven

When visionary entrepreneur Elon Musk founded SpaceX in 2002, he did so with the intention of rekindling human space exploration and sending humans to Mars. Intrinsic to this vision was the reduction of costs associated with individual launches, which has so far been focused on the development of reusable first-stage rockets. However, the company recently announced that they are looking to make their rocket’s payload fairings reusable as well.

The payload fairing is basically the disposable shell at the top of the rocket that protects the cargo during launch. Once the rocket reaches space, the fairing halves fall away to release the payload and are ordinarily lost. If they could be retrieved, it would reduce launch costs by millions more. Known as “Mr. Steven”, this new retrieval system consists of a platform ship, extended arms, and a net strung between them.

Mr. Steven is not unlike SpaceX’s Autonomous Spaceport Drone Ships (ASDS), which are used to retrieve first-stage rocket boosters at sea. SpaceX has two operational drone ships: Just Read the Instructions, which is stationed in the Pacific to retrieve boosters from Vandenberg launches, and Of Course I Still Love You, which is stationed in the Atlantic to retrieve boosters from Cape Canaveral launches.

The first ten IridiumNEXT satellites are stacked and encapsulated in the Falcon 9 fairing for launch from Vandenberg Air Force Base, Ca., in early 2017. Credit: Iridium

Recently, Teslarati’s Pauline Acalin captured some photographs of Mr. Steven while it was docked on the California coast near Vandenberg Air Force Base, where it was preparing to head out to sea in support of the latest Falcon 9 launch. Known as the PAZ Mission, this launch will place a Spanish Earth-observation satellite in orbit, as well as two test satellites that are part of SpaceX’s plan to provide broadband internet service.

Originally scheduled for Wednesday, February 21st, the launch was scrubbed due to strong upper-level winds. It is currently scheduled to take place at 6:17 a.m. PST (14:17 UTC) on Thursday, February 22nd, from Space Launch Complex 4 East (SLC-4E) at Vandenberg Air Force Base. After the cargo is deployed to orbit, the fairings will descend slowly back to Earth under GPS-guided parachutes.

These chutes will guide the fairings down to the Pacific Ocean, where Mr. Steven will sail to meet them. If all goes as planned, the fairings will touch down gently in the net and be recovered for later use. In March of 2017, SpaceX successfully recovered a fairing for the first time, allowing the company to recoup an estimated $6 million from that launch.

At present, SpaceX lists the cost of an individual Falcon 9 launch at an estimated $62 million. If the payload fairings can be recovered regularly, the company stands to recoup roughly 10% of the cost of every individual Falcon 9 launch.
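Here is the arithmetic behind that roughly-10% figure, along with a purely hypothetical flight rate to show how the savings would compound. The $62 million and $6 million figures are the ones cited above; the launch cadence is an assumption for illustration only:

```python
# Checking the ~10% figure and projecting savings at an assumed cadence.
launch_price = 62_000_000   # listed Falcon 9 launch price, in USD
fairing_cost = 6_000_000    # estimated value of a recovered fairing, in USD

fraction = fairing_cost / launch_price
print(f"Fairings represent about {fraction:.1%} of the launch price")
# Prints about 9.7%, hence the roughly 10% quoted above.

launches_per_year = 20      # hypothetical flight rate, for illustration only
savings = fairing_cost * launches_per_year
print(f"Recovering every fairing would save about ${savings:,} per year")
```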

This news comes on the heels of SpaceX having successfully launched its Falcon Heavy rocket, which carried a Tesla Roadster (and its mannequin driver, “Starman”) into orbit. The launch was made all the more impressive by the fact that two of the three rocket boosters were successfully recovered. The core booster, unfortunately, crashed while attempting to land on one of the ASDS at sea.

At this rate, SpaceX may even start trying to recover its rockets’ second stages in the not-too-distant future. If all components of a rocket become reusable, the only costs associated with individual launches will be the one-time manufacturing cost of the rocket, the cost of fuel, and any post-launch maintenance.

For fans of space exploration and commercial aerospace, this is certainly exciting news! With every cost-cutting measure, the possibilities for scientific research and crewed missions increase exponentially. Imagine a future where it costs roughly the same to deploy space habitats to orbit as it does to deploy commercial satellites, and sending space-based solar arrays to orbit (and maybe even building a space elevator) is financially feasible!

It might sound a bit fantastic, but when the costs are no longer prohibitive, a lot of things become possible.

Further Reading: Teslarati, TechCrunch

Russia and China Are Working on Space and Counterspace Weapons

Every year, the Director of National Intelligence (DNI) releases the Worldwide Threat Assessment of the US Intelligence Community. This annual report contains the intelligence community’s assessment of potential threats to US national security and makes recommendations accordingly. In recent years, these threats have included the development and proliferation of weapons, regional wars, economic trends, terrorism, cyberterrorism, etc.

This year’s assessment, which was released on February 8th, 2018, was certainly a mixed bag of warnings. Among the many potential threats to national security, the authors emphasized the many recent developments taking place in space. According to their assessment, the expansion of the global space industry, growing cooperation between the private and public sector, and the growing presence of various states in space could constitute a threat to US national security.

Naturally, the two chief actors singled out were China and Russia. As the authors indicate, these countries will be leading the pack in the coming years when it comes to expanding space-based reconnaissance, communications, and navigation systems. This will not only enhance their capabilities (and those of their allies) when it comes to space-based research, but will have military applications as well.

The second flight of the Long March 5 lifting off from Wenchang on July 2nd, 2017. Credit: CNS

As they state in the section of the report titled “Space and Counterspace”:

“Continued global space industry expansion will further extend space-enabled capabilities and space situational awareness to nation-state, nonstate, and commercial space actors in the coming years, enabled by the increased availability of technology, private-sector investment, and growing international partnerships for shared production and operation… All actors will increasingly have access to space-derived information services, such as imagery, weather, communications, and positioning, navigation, and timing for intelligence, military, scientific, or business purposes.”

A key aspect of this development is outlined in the section titled “Emerging and Disruptive Technology,” which addresses everything from the development of AI and internet technologies to additive manufacturing and advanced materials. In short, it is not just the development of new rockets and spacecraft that is at issue here, but also the benefits brought about by cheaper and lighter materials, more rapid information sharing, and faster production.

“Emerging technology and new applications of existing technology will also allow our adversaries to more readily develop weapon systems that can strike farther, faster, and harder and challenge the United States in all warfare domains, including space,” they write.

Artist’s illustration of China’s 8-ton Tiangong-1 space station, which is expected to fall to Earth in 2018. Credit: CMSE

Specifically, anti-satellite (ASAT) weapons are addressed as the major threat. Such technologies, according to the report, have the potential to reduce US and allied military effectiveness by disrupting global communications, navigation, and coordination between nations and armies. These technologies could be destructive, in the form of anti-satellite missiles, but also nondestructive – e.g. electromagnetic pulse (EMP) devices. As they indicate:

“We assess that, if a future conflict were to occur involving Russia or China, either country would justify attacks against US and allied satellites as necessary to offset any perceived US military advantage derived from military, civil, or commercial space systems. Military reforms in both countries in the past few years indicate an increased focus on establishing operational forces designed to integrate attacks against space systems and services with military operations in other domains.”

The authors further anticipate that Russian and Chinese destructive ASAT technology could reach operational capacity within a few years’ time. To this end, they cite recent changes in the People’s Liberation Army (PLA), which include the formation of military units trained in counter-space operations and the development of ground-launched ASAT missiles.

While they are not certain about Russia’s capability to wage ASAT warfare, they venture that similar developments are taking place there. Another area of focus is the development of directed-energy weapons for the purpose of blinding or damaging space-based optical sensors. This technology is similar to what the US investigated decades ago for the sake of strategic missile defense – aka the Strategic Defense Initiative (SDI).

An artist’s concept of a Space Laser Satellite Defense System. Credit: USAF

While these weapons would not be used to blow up satellites in the conventional sense, they would be capable of blinding or damaging sensitive space-based optical sensors. On top of that, the report cites how Russia and China continue to conduct on-orbit activities and launch satellites that are deemed “experimental”. A good example of this is a recent proposal made by researchers from the Information and Navigation College at China’s Air Force Engineering University.

The study detailing their findings called for the deployment of a high-powered pulsed ablative laser that could be used to break up space junk. While the authors admit that such technology can have peaceful applications – ranging from debris removal to satellite inspection, refueling, and repair – it could also be used against other spacecraft. And while the United States has been researching such technology for decades, China and Russia’s growing presence in space threatens to tilt this balance of power.

Moreover, there are the loopholes in the existing legal framework – as outlined in the Outer Space Treaty – which the authors believe China and Russia are intent on exploiting:

“Russia and China continue to publicly and diplomatically promote international agreements on the nonweaponization of space and “no first placement” of weapons in space. However, many classes of weapons would not be addressed by such proposals, allowing them to continue their pursuit of space warfare capabilities while publicly maintaining that space must be a peaceful domain.”

Artist’s impression of a laser removing orbital debris, based on NASA pictures. Credit: Fulvio314/NASA/Wikimedia Commons

For example, the Outer Space Treaty bars signatories from placing weapons of mass destruction in orbit of Earth, on the Moon, on any other celestial body, or in outer space in general. In practice, this refers to nuclear devices, but it does not extend to conventional weapons in orbit. This leaves room for anti-satellite platforms or other conventional space-based weapons that could constitute a major threat.

Beyond China and Russia, the report also indicates that Iran’s growing capabilities in rocketry and missile technology could pose a threat down the road. As with the American and Russian space programs, developments in space rocketry and ICBMs are seen as complementary to each other:

“Iran’s ballistic missile programs give it the potential to hold targets at risk across the region, and Tehran already has the largest inventory of ballistic missiles in the Middle East. Tehran’s desire to deter the United States might drive it to field an ICBM. Progress on Iran’s space program, such as the launch of the Simorgh SLV in July 2017, could shorten a pathway to an ICBM because space launch vehicles use similar technologies.”

All told, the report makes some rather predictable assessments. Given China and Russia’s growing power in space, it is only natural that the DNI would see this as a potential threat. However, that does not mean that one should assume an alarmist attitude. When it comes to assessing threats, points are awarded for considering every contingency. But if history has taught us anything, it’s that assessment and realization are two very different things.

Remember Sputnik? The lesson there was clear. Don’t panic!

Further Reading: DNI