Telescopes have come a long way in the past few centuries. From the comparatively modest devices built by astronomers like Galileo Galilei and Johannes Kepler, telescopes have evolved to become massive instruments that require an entire facility to house them and a full crew and network of computers to run them. And in the coming years, much larger observatories will be constructed that can do even more.
Unfortunately, this trend towards larger and larger instruments has many drawbacks. For starters, increasingly large observatories require either increasingly large mirrors or many telescopes working together – both of which are expensive prospects. Luckily, a team from MIT has proposed combining interferometry with quantum teleportation, which could significantly increase the resolution of arrays without relying on larger mirrors.
Space junk is a growing problem. For decades we have been sending satellites into orbit around Earth. Some of them de-orbit and burn up in Earth’s atmosphere, or crash into the surface. But most of the stuff we send into orbit is still up there.
This is becoming an acute problem as years go by and we launch more and more hardware into orbit. Since the very first satellite—Sputnik 1—was launched into orbit in 1957, over 8000 satellites have been placed in orbit. As of 2018, an estimated 4900 are still in orbit. About 3000 of those are not operational. They’re space junk. The risk of collision is growing, and scientists are working on solutions. The problem will compound itself over time, as collisions between objects create more pieces of debris that have to be dealt with.
These missions will look farther into the cosmos than ever before and help astronomers address questions like how the Universe evolved and whether there is life in other star systems. Unfortunately, all these missions have two things in common: in addition to being very large and complex, they are also very expensive. This is why some scientists are proposing that we rely on more cost-effective ideas like swarm telescopes.
Two such scientists are Jayce Dowell and Gregory B. Taylor, a research assistant professor and professor (respectively) with the Department of Physics and Astronomy at the University of New Mexico. Together, the pair outlined their idea in a study titled “The Swarm Telescope Concept”, which recently appeared online and was accepted for publication by the Journal of Astronomical Instrumentation.
As they state in their study, traditional astronomy has focused on the construction, maintenance and operation of single telescopes. The one exception to this is radio astronomy, where facilities have been spread over an extensive geographic area in order to obtain high angular resolution. Examples of this include the Very Long Baseline Array (VLBA), and the proposed Square Kilometer Array (SKA).
In addition, there’s also the problem of how telescopes are becoming increasingly reliant on computing and digital signal processing. As they explain in their study, telescopes commonly carry out multiple simultaneous observation campaigns, which increases the operational complexity of the facility due to conflicting configuration requirements and scheduling considerations.
A possible solution, according to Dowell and Taylor, is to rethink telescopes. Instead of a single instrument, the telescope would consist of a distributed array where many autonomous elements come together through a data transport system to function as a single facility. This approach, they claim, would be especially useful when it comes to the Next Generation Very Large Array (NGVLA) – a future interferometer that will build on the legacy of the Karl G. Jansky Very Large Array and the Atacama Large Millimeter/submillimeter Array (ALMA). As they state in their study:
“At the core of the swarm telescope is a shift away from thinking about an observatory as a monolithic entity. Rather, an observatory is viewed as many independent parts that work together to accomplish scientific observations. This shift requires moving part of the decision making about the facility away from the human schedulers and operators and transitioning it to “software defined operators” that run on each part of the facility. These software agents then communicate with each other and build dynamic arrays to accomplish the goals of multiple observers, while also adjusting for varying observing conditions and array element states across the facility.”
This idea for a distributed telescope is inspired by the concept of swarm intelligence, where large swarms of robots are programmed to interact with each other and their environment to perform complex tasks. As they explain, the facility comes down to three major components: autonomous element control, a method of inter-element communication, and data transport management.
Of these components, the most critical is the autonomous element control which governs the actions of each element of the facility. While similar to traditional monitoring and control systems used to control individual robotic telescopes, this system would be different in that it would be responsible for far more. Overall, the element control would be responsible for ensuring the safety of the telescope and maximizing the utilization of the element.
“The first, safety of the element, requires multiple monitoring points and preventative actions in order to identify and prevent problems,” they explain. “The second direction requires methods of relating the goals of an observation to the performance of an element in order to maximize the quantity and quality of the observations, and automated methods of recovering from problems when they occur.”
The second component, inter-element communication, is what allows the individual elements to come together to form the interferometer. This can take the form of a leaderless system (where there is no single point of control), or an organizer system, where all of the communication between the elements and with the observation queue is done through a single point of control (i.e. the organizer).
Lastly, there is the issue of data transport management, which can take one of two forms based on existing telescopes. These range from fully off-line systems, where correlation is done post-observation – as with the Very Long Baseline Array (VLBA) – to fully-connected systems, where correlation is done in real-time (as with the VLA). For the sake of their array, the team emphasized that connectivity and real-time correlation are a must.
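As an illustration of the “organizer” communication pattern described above, the sketch below shows how software-defined operators might volunteer healthy, idle elements to a single point of control, which then builds an ad hoc array. All class and method names here are hypothetical, not taken from the study:

```python
# Illustrative sketch of the "organizer" pattern for a swarm telescope.
# All names are hypothetical; this is not code from Dowell & Taylor's study.

class ElementAgent:
    """Software-defined operator for one array element."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.busy = False

    def available(self):
        # Element control is responsible for safety and utilization,
        # so an element only volunteers when healthy and idle.
        return self.healthy and not self.busy

class Organizer:
    """Single point of control that builds dynamic arrays on request."""
    def __init__(self, elements):
        self.elements = elements

    def build_array(self, n_needed):
        # Gather currently available elements into an ad hoc array.
        chosen = [e for e in self.elements if e.available()][:n_needed]
        for e in chosen:
            e.busy = True
        return chosen

agents = [ElementAgent(f"dish-{i}") for i in range(5)]
agents[2].healthy = False          # simulate one failed element
org = Organizer(agents)
array = org.build_array(3)
print([e.name for e in array])     # the failed dish-2 is skipped automatically
```

A leaderless variant would distribute the `build_array` negotiation across the agents themselves rather than routing it through one organizer.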
After considering all these components and how they are used by existing arrays, Dowell and Taylor conclude that the swarm concept is a natural extension of the advances being made in robotic and thinking telescopes, as well as interferometry. The advantages of this are spelled out in their conclusions:
“It allows for more efficient operations of facilities by moving much of the daily operational work done by humans to autonomous control systems. This, in turn, frees up personnel to focus on the scientific output of the telescope. The swarm concept can also combine the unused resources of the different elements together to form an ad hoc array.”
In addition, swarm telescopes will offer new opportunities and funding since they will consist of small elements that can be owned and operated by different entities. In this way, different organizations would be able to conduct science with their own elements while also being able to benefit from large-scale interferometric observations.
This concept is similar to the Modular Active Self-Assembling Space Telescope Swarms, which calls for a swarm of robots that would assemble in space to form a 30 meter (~100 ft) telescope. The concept was proposed by a team of American astronomers led by Dmitri Savransky, an assistant professor of mechanical and aerospace engineering at Cornell University.
This proposal was part of the 2020 Decadal Survey for Astrophysics and was recently selected for Phase I development as part of the 2018 NASA Innovative Advanced Concepts (NIAC) program. So while many large-scale telescopes will be entering service in the near future, the next-next-generation of telescopes could include a few arrays made up of swarms of robots directed by artificial intelligence.
Such arrays would be capable of achieving high-resolution astronomy and interferometry at lower costs, and could free up large, complex arrays for other observations.
When it comes to the new era of space exploration, one of the primary focuses has been on cutting costs. By reducing the costs associated with individual launches, space agencies and private aerospace companies will not only be able to commercialize Low Earth-Orbit (LEO), but also mount far more in the way of exploration missions and maybe even colonize space.
Several methods have been proposed so far for reducing launch costs, which include reusable rockets and single-stage-to-orbit rockets. However, a team of engineers from the University of Glasgow and Ukraine recently proposed an entirely different idea that could make launching small payloads affordable – a self-eating rocket! This “autophage” rocket could send small satellites into space more easily and more affordably.
The study which describes how they built and tested the “autophage” engine recently appeared in the Journal of Spacecraft and Rockets under the title “Autophage Engines: Toward a Throttleable Solid Motor”. The team was led by Vitaly Yemets and Patrick Harkness – a Professor from the Oles Honchar Dnipro National University in Ukraine and a Senior Lecturer from the University of Glasgow, respectively.
Together, the team addressed one of the most pressing issues when it comes to rockets today: the storage tanks that contain a rocket’s propellants as it climbs weigh many times more than the spacecraft’s payload. This reduces the efficiency of the launch vehicle and also adds to the problem of space debris, since these fuel tanks are disposable and fall away when spent.
As Dr Patrick Harkness, who led Glasgow’s contribution to the work, explained in a recent University of Glasgow press release:
“Over the last decade, Glasgow has become a centre of excellence for the UK space industry, particularly in small satellites known as ‘CubeSats’, which provide researchers with affordable access to space-based experiments. There’s also potential for the UK’s planned spaceport to be based in Scotland. However, launch vehicles tend to be large because you need a large amount of propellant to reach space. If you try to scale down, the volume of propellant falls more quickly than the mass of the structure, so there is a limit to how small you can go. You will be left with a vehicle that is smaller but, proportionately, too heavy to reach an orbital speed.”
In contrast, an autophage engine consumes its own structure during ascent, so more cargo capacity could be freed up and less debris would enter orbit. The propellant consists of a solid fuel rod (made of a solid plastic like polyethylene) on the outside and an oxidizer on the inside. By driving the rod into a hot engine, the fuel and oxidizer are vaporized to create gas that then flows into the combustion chamber to produce thrust.
“A rocket powered by an autophage engine would be different,” said Dr. Harkness. “The propellant rod itself would make up the body of the rocket, and as the vehicle climbed the engine would work its way up, consuming the body from base to tip. That would mean that the rocket structure would actually be consumed as fuel, so we wouldn’t face the same problems of excessive structural mass. We could size the launch vehicles to match our small satellites, and offer more rapid and more targeted access to space.”
The research team also showed that the engine could be throttled by simply varying the speed at which the rod is driven into the engine, which is something rare in a solid motor. During lab tests, the team was able to sustain rocket operations for 60 seconds at a time. As Dr. Harkness said, the team hopes to build on this and eventually conduct a launch test:
“While we’re still at an early stage of development, we have an effective engine testbed in the laboratory in Dnipro, and we are working with our colleagues there to improve it still further. The next step is to secure further funding to investigate how the engine could be incorporated into a launch vehicle.”
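The throttling principle mentioned above – thrust scaling with the speed at which the rod is fed into the engine – can be illustrated with the basic rocket-thrust relation F = ṁ·vₑ, where the mass flow rate ṁ is set by the rod’s density, cross-section, and feed speed. This is a back-of-the-envelope sketch with assumed numbers, not the team’s actual engine model:

```python
# Back-of-the-envelope illustration of throttling an autophage engine:
# thrust F = mdot * v_e, where the propellant mass flow rate mdot is
# fixed by how fast the rod is driven into the hot engine.
# All numbers below are assumptions chosen for illustration only.

import math

rho = 1100.0          # effective rod density, kg/m^3 (fuel + oxidizer, assumed)
diameter = 0.10       # rod diameter, m (assumed)
v_e = 2000.0          # effective exhaust velocity, m/s (assumed)

area = math.pi * (diameter / 2) ** 2   # rod cross-sectional area, m^2

def thrust(feed_rate):
    """Thrust (N) for a given rod feed rate (m/s)."""
    mdot = rho * area * feed_rate      # mass flow rate, kg/s
    return mdot * v_e

for u in (0.005, 0.010, 0.020):        # throttle by varying the feed rate
    print(f"feed {u * 1000:4.1f} mm/s -> thrust {thrust(u):6.1f} N")
```

Because ṁ is directly proportional to the feed rate, doubling the feed speed doubles the thrust – which is exactly the kind of control a conventional solid motor, whose grain burns at its own pace, cannot offer.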
Another challenge of the modern space age is how to deliver additional payloads and satellites into orbit without creating more in the way of orbital clutter. By introducing an engine that allows for cheap launches and leaves no disposable parts behind, the autophage could be a game-changing technology, one which is right up there with fully-recoverable rockets.
The research team also consisted of Mykola Dron and Anatoly Pashkov – a Professor and Senior Researcher from Oles Honchar Dnipro National University – and Kevin Worrall and Michael Middleton – a Research Associate and M.S. student from the University of Glasgow.
Ever since NASA announced that they had created a prototype of the controversial Radio Frequency Resonant Cavity Thruster (aka. the EM Drive), any and all reported results have been the subject of controversy. Initially, reported tests were the stuff of rumors and leaks, and the results were treated with understandable skepticism. Even after the paper submitted by the Eagleworks team passed peer review, there have still been unanswered questions.
To recap, the EM Drive is a concept for an experimental space engine that came to the attention of the space community years ago. It consists of a hollow cone made of copper or other materials that reflects microwaves between opposite walls of the cavity in order to generate thrust. Unfortunately, this drive system is based on principles that violate the Conservation of Momentum law.
This law states that within a system, the amount of momentum remains constant and is neither created nor destroyed, but only changes through the action of forces. Since the EM Drive involves electromagnetic microwave cavities converting electrical energy directly into thrust, it has no reaction mass. It is therefore “impossible”, as far as conventional physics goes.
As a result, many scientists have been skeptical about the EM Drive and wanted to see definitive evidence that it works. In response, a team of scientists at NASA’s Eagleworks Laboratories began conducting a test of the propulsion system. The team was led by Harold White, the Advanced Propulsion Team Lead for the NASA Engineering Directorate and the Principal Investigator for NASA’s Eagleworks lab.
The most recent test was conducted by a team at TU Dresden, whose prototype consisted of a cone-shaped hollow engine set inside a highly-shielded vacuum chamber, at which microwaves were then fired. While they found that the EM Drive did experience thrust, the detectable thrust may not have been coming from the engine itself. Essentially, the thruster exhibited the same amount of force regardless of which direction it was pointing.
This suggested that the thrust was originating from another source, which they believe could be the result of interaction between engine cables and the Earth’s magnetic field. As they conclude in their report:
“First measurement campaigns were carried out with both thruster models reaching thrust/thrust-to-power levels comparable to claimed values. However, we found that e.g. magnetic interaction from twisted-pair cables and amplifiers with the Earth’s magnetic field can be a significant error source for EMDrives. We continue to improve our measurement setup and thruster developments in order to finally assess if any of these concepts is viable and if it can be scaled up.”
In other words, the mystery thrust reported by previous experiments may have been nothing more than an error. If true, it would explain how the “impossible EM Drive” was able to achieve small amounts of measurable thrust when the laws of physics claim it shouldn’t be. However, the team also emphasized that more testing will be needed before the EM Drive can be dismissed or validated with confidence.
Alas, it seems that the promise of being able to travel to the Moon in just four hours, to Mars in 70 days, and to Pluto in 18 months – all without the need for propellant – may have to wait. But rest assured, many other experimental technologies are being tested that could one day allow us to travel within our Solar System (and beyond) in record time. And additional tests will be needed before the EM Drive can be written off as just another pipe dream.
The team also conducted their own test of the Mach-Effect Thruster, another concept that is considered to be unlikely by many scientists. The team reported more favorable results with this concept, though they indicated that more research is needed here as well before anything can be conclusively said. You can learn more about the team’s test results for both engines by reading their report here.
And be sure to check out this video by Scott Manley, who explains the latest test and its results.
It’s a staple of science fiction, and something many people have fantasized about at one time or another: the idea of sending out spaceships with colonists and transplanting the seed of humanity among the stars. Between discovering new worlds, becoming an interstellar species, and maybe even finding extra-terrestrial civilizations, the dream of spreading beyond the Solar System is one that can’t become reality soon enough!
As we reviewed in a previous article, “How Long Would it Take to Travel to the Nearest Star?“, there are numerous proposed and theoretical ways to travel between our Solar System and other stars in the galaxy. However, beyond the technology involved, and the time it would take, there are also the biological and psychological implications for human crews that would need to be taken into account beforehand.
And thanks to the way public interest in space exploration has been renewed in recent years, cost-benefit analyses of all the possible methods are becoming increasingly necessary. As Dr. Braddock told Universe Today via email:
“Interstellar travel has become more relevant because of the concerted effort to find ways across all of the space agencies to maintain human health in ‘short’ (2-3 yr) space travel. With Mars missions reasonably in sight, Stephen Hawking’s death highlighting one of his many beliefs that we should colonize deep space and Elon Musk’s determination to minimize waste on space travel, together with reborn visions of ‘bolt-on’ accessories to the ISS (the Bigelow expandable module) conjures some imaginative concepts.”
All told, Dr. Braddock considers five principal means for mounting crewed missions to other star systems in his study. These include super-luminal (aka. FTL) travel, hibernation or stasis regimes, negligible senescence (aka. anti-aging) engineering, world ships capable of supporting multiple generations of travellers (aka. generation ships), and cryogenic freezing technologies.
To break it down succinctly, the first of these methods – FTL (or “warp”) travel – involves stretching the fabric of space-time in a wave which would (in theory) cause the space ahead of a ship to contract and the space behind it to expand. The ship would then ride this region, known as a “warp bubble”, through space. Since the ship is not moving within the bubble, but is being carried along as the region itself moves, conventional relativistic effects such as time dilation would not apply.
As Dr. Braddock indicates, the advantages of such a propulsion system include being able to achieve “apparent” FTL travel without violating the laws of Relativity. In addition, a ship traveling in a warp bubble would not have to worry about colliding with space debris, and there would be no upper limit to the maximum speed attainable. Unfortunately, the downsides of this method of travel are equally obvious.
These include the fact that there are currently no known methods for creating a warp bubble in a region of space that does not already contain one. In addition, extremely high energies would be required to create this effect, and there is no known way for a ship to exit a warp bubble once it has entered. In short, FTL is a purely theoretical concept for the time being, and there are no indications that it will move from theory to practice in the near future.
“The first [strategy] is FTL travel, but the other strategies accept that FTL travel is very theoretical and that one option is to extend human life or to engage in multiple-generational voyages,” said Dr. Braddock. “The latter could be achieved in the future, given the willingness to design a large enough craft and the propulsion technology development to achieve 0.1 x c.”
In other words, the most plausible concepts for interstellar space travel are not likely to achieve speeds of more than ten percent the speed of light – about 29,979,245.8 m/s (~107,925,285 km/h; 67,061,663 mph). This is still a very tall order considering that the fastest mission to date was Helios 2, which achieved a maximum velocity of over 66,000 m/s (240,000 km/h; 150,000 mph). Still, this provides a more realistic framework to work within.
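To put those speeds in perspective, here is a quick constant-speed travel-time estimate for the roughly 4.24 light-years to Proxima Centauri, the nearest star (a simple non-relativistic calculation that ignores acceleration and deceleration):

```python
# Constant-speed travel-time estimate to Proxima Centauri.
# Non-relativistic back-of-the-envelope figures; acceleration,
# deceleration, and time dilation are all ignored.

C = 299_792_458.0                 # speed of light, m/s
LY = 9.4607e15                    # one light-year, m
DIST = 4.24 * LY                  # distance to Proxima Centauri, m
SECONDS_PER_YEAR = 3.156e7

def years_at(speed_m_s):
    """One-way trip time in years at a constant speed (m/s)."""
    return DIST / speed_m_s / SECONDS_PER_YEAR

print(f"at 0.1 c:    {years_at(0.1 * C):8.1f} years")   # ~42 years
print(f"at Helios 2: {years_at(66_000.0):8.0f} years")  # ~19,000 years
```

Even at the optimistic 0.1c ceiling, the nearest star is a four-decade trip one way, which is what motivates the hibernation, anti-aging, and generation-ship strategies discussed next.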
Where hibernation and stasis regimens are concerned, the advantages (and disadvantages) are more immediate. For starters, the technology is realizable and has been extensively studied on shorter timescales for both humans and animals. In the latter case, natural hibernation cycles provide the most compelling evidence that hibernation can last for months without incident.
The downsides, however, come down to all the unknowns. For example, there are the likely risks of tissue atrophy resulting from extended periods of time spent in a microgravity environment. This could be mitigated by artificial gravity or other means (such as electrostimulation of muscles), but considerable clinical research is needed before this could be attempted. This raises a whole slew of ethical issues, since such tests would pose their own risks.
Strategies for Engineered Negligible Senescence (SENS) are another avenue, offering the potential for human beings to counter the effects of long-duration spaceflight by reversing the aging process. In addition to ensuring that the same generation that boarded the ship would be the one to make it to its destination, this technique also has the potential to drive stem cell therapy research here on Earth.
However, in the context of long-duration spaceflight, multiple treatments (or continuous ones throughout the travel process) would likely be necessary to achieve full rejuvenation. A considerable amount of research would also be needed beforehand in order to test the process and address the individual components of aging, once again leading to a number of ethical issues.
Then there are worldships (aka. generation ships): self-contained and self-sustaining spacecraft large enough to accommodate several generations of space travelers. These ships would rely on conventional propulsion and therefore take centuries (or millennia) to reach another star system. The immediate advantage of this concept is that it would fulfill two major goals of space exploration: maintaining a human colony in space and permitting travel to a potentially-habitable exoplanet.
In addition, a generation ship would rely on propulsion concepts that are currently feasible, and a crew of thousands would multiply the chances of successfully colonizing another planet. Of course, the cost of constructing and maintaining such large spaceships would be prohibitive. There are also the moral and ethical challenges of sending human crews into deep space for such extended periods of time.
For instance, is there any guarantee that the crew wouldn’t all go insane and kill each other? And last, there is the fact that newer, more advanced ships would be developed on Earth in the meantime. This means that a faster ship, which would depart Earth later, would be able to overtake a generation ship before it reached another star system. Why spend so much on a ship when it’s likely to become obsolete before it even makes it to its destination?
Last, there is cryogenics, a concept that has been explored extensively in the past few decades as a possible means for life-extension and space travel. In many ways, this concept is an extension of hibernation technology, but benefits from a number of recent advancements. The immediate advantage of this method is that it accounts for all the current limitations imposed by technology and a relativistic Universe.
Basically, it doesn’t matter if FTL (or speeds beyond 0.10 c) are possible or how long a voyage will take, since the crew will be asleep and perfectly preserved for the duration. On top of that, we already know the technology works, as demonstrated by recent advancements in which organ tissues and even whole organisms were cryogenically vitrified and then successfully rewarmed.
However, the risks are also greater than with hibernation. For instance, the long-term effects of cryogenic freezing on the physiology and central nervous systems of higher-order animals and humans are not yet known. This means that extensive testing and human trials would be needed before it was ever attempted, which once again raises a number of ethical challenges.
In the end, there are a lot of unknowns associated with any and all potential methods of interstellar travel. Similarly, much more research and development is necessary before we can safely say which of them is the most feasible. In the meantime, Dr. Braddock admits that it’s much more likely that any interstellar voyages will involve robotic explorers using telepresence technology to show us other worlds – though these don’t possess the same allure.
“Almost certainly, and this revisits the early concept of von Neumann replication probes (minus the replication!),” he said. “Cube Sats or the like may well achieve this goal but will likely not engage the public imagination nearly as much as human space travel. I believe Sir Martin Rees has suggested the concept of a semi-human AI type device… also some way off.”
Currently, there is only one proposed mission for sending an interstellar spacecraft to a nearby star system. This would be Breakthrough Starshot, a proposal to send a laser sail-driven nanocraft to Alpha Centauri in just 20 years. After being accelerated to 20% the speed of light – roughly 60,000 km/s (~215 million km/h, or 134 million mph) – this craft would conduct a flyby of Alpha Centauri and also be able to beam home images of Proxima b.
Beyond that, all the missions that involve venturing to the outer Solar System consist of robotic orbiters and probes and all proposed crewed missions are directed at sending astronauts back to the Moon and on to Mars. Still, humanity is just getting started with space exploration and we certainly need to finish exploring our own Solar System before we can contemplate exploring beyond it.
In the end, a lot of time and patience will be needed before we can start to venture beyond the Kuiper Belt and Oort Cloud to see what’s out there.
Looking to the future of crewed space exploration, it is clear to NASA and other space agencies that certain technological requirements need to be met. Not only are a new generation of launch vehicles and space capsules needed (like the SLS and Orion spacecraft), but new forms of energy production are needed to ensure that long-duration missions to the Moon, Mars, and other locations in the Solar System can take place.
One possibility that addresses these concerns is Kilopower, a lightweight fission power system that could power robotic missions, bases and exploration missions. In collaboration with the Department of Energy’s National Nuclear Security Administration (NNSA), NASA recently conducted a successful demonstration of a new nuclear reactor power system that could enable long-duration crewed missions to the Moon, Mars, and beyond.
Known as the Kilopower Reactor Using Stirling Technology (KRUSTY) experiment, the technology was unveiled at a recent news conference on Wednesday, May 2nd, at NASA’s Glenn Research Center. According to NASA, this power system is capable of generating up to 10 kilowatts of electrical power – enough to power several households continuously for ten years, or an outpost on the Moon or Mars.
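A quick sanity check on that claim: 10 kW of continuous output over a decade corresponds to a substantial total energy budget. The household consumption figure below is an assumed US average used only for comparison:

```python
# Sanity-check arithmetic for the Kilopower output claim.
# The household consumption figure is an assumed US average,
# used only to put the reactor's output in context.

power_kw = 10.0                      # reactor electrical output, kW
hours = 10 * 365.25 * 24             # ten years of continuous operation
total_kwh = power_kw * hours         # total energy delivered, kWh

household_kwh_per_year = 10_000.0    # assumed average US household, kWh/yr
households = total_kwh / (10 * household_kwh_per_year)

print(f"total energy: {total_kwh:,.0f} kWh over ten years")
print(f"equivalent to ~{households:.0f} households over the same period")
```

Running continuously, the reactor delivers close to 880,000 kWh over the decade – consistent with NASA’s “several households” framing.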
As Jim Reuter, NASA’s acting associate administrator for the Space Technology Mission Directorate (STMD), explained in a recent NASA press release:
“Safe, efficient and plentiful energy will be the key to future robotic and human exploration. I expect the Kilopower project to be an essential part of lunar and Mars power architectures as they evolve.”
The prototype power system employs a small solid uranium-235 reactor core and passive sodium heat pipes to transfer reactor heat to high-efficiency Stirling engines, which convert the heat to electricity. This power system is ideally suited to locations like the Moon, where power generation using solar arrays is difficult because lunar nights are equivalent to 14 days on Earth.
In addition, many plans for lunar exploration involve building outposts in the permanently-shaded polar regions or in stable underground lava tubes. On Mars, sunshine is more plentiful, but subject to the planet’s diurnal cycle and weather (such as dust storms). This technology could therefore ensure a steady supply of power that is not dependent on intermittent sources like sunlight. As Marc Gibson, the lead Kilopower engineer at Glenn, said:
“Kilopower gives us the ability to do much higher power missions, and to explore the shadowed craters of the Moon. When we start sending astronauts for long stays on the Moon and to other planets, that’s going to require a new class of power that we’ve never needed before.”
The Kilopower experiment was conducted at the NNSA’s Nevada National Security Site (NNSS) between November 2017 and March 2018. In addition to demonstrating that the system could produce electricity through fission, the purpose of the experiment was also to show that it is stable and safe in any environment. For this reason, the Kilopower team conducted the experiment in four phases.
The first two phases, which were conducted without power, confirmed that each component in the system functioned properly. For the third phase, the team increased power to heat the core slowly before moving on to phase four, which consisted of a 28-hour, full-power test run. This phase simulated all stages of a mission, which included a reactor startup, ramp up to full power, steady operation and shutdown.
Throughout the experiment, the team simulated various system failures to ensure that the system would keep working – which included power reductions, failed engines and failed heat pipes. Throughout, the KRUSTY generator kept on providing electricity, proving that it can endure whatever space exploration throws at it. As Gibson indicated:
“We put the system through its paces. We understand the reactor very well, and this test proved that the system works the way we designed it to work. No matter what environment we expose it to, the reactor performs very well.”
Looking ahead, the Kilopower project will remain a part of NASA’s Game Changing Development (GCD) program. As part of NASA’s Space Technology Mission Directorate (STMD), this program’s goal is to advance space technologies that may lead to entirely new approaches for the Agency’s future space missions. Eventually, the team hopes to make the transition to the Technology Demonstration Mission (TDM) program by 2020.
If all goes well, the KRUSTY reactor could allow for permanent human outposts on the Moon and Mars. It could also offer support to missions that rely on In-situ Resource Utilization (ISRU) to produce hydrazine fuel from local sources of water ice, and building materials from local regolith.
Basically, when robotic missions are mounted to the Moon to 3D print bases out of local regolith, and astronauts begin making regular trips to the Moon to conduct research and experiments (like they do today to the International Space Station), it could be KRUSTY reactors that provide them with all their power needs. In a few decades, the same could be true for Mars and even locations in the outer Solar System.
The team behind this concept is led by Dmitri Savransky, an assistant professor of mechanical and aerospace engineering at Cornell University. Along with 15 colleagues from across the US, Savransky has produced a concept for a ~30 meter (100 foot) modular space telescope with adaptive optics. But the real kicker is the fact that it would be made up of a swarm of modules that would assemble themselves autonomously.
Prof. Savransky is well-versed in space telescopes and exoplanet hunting, having assisted in the integration and testing of the Gemini Planet Imager – an instrument on the Gemini South Telescope in Chile. He also participated in the planning of the Gemini Planet Imager Exoplanet Survey, which discovered a Jupiter-like planet orbiting 51 Eridani (51 Eridani b) in 2015.
But looking to the future, Prof. Savransky believes that self-assembly is the way to go to create a super telescope. As he and his team described the telescope in their proposal:
“The entire structure of the telescope, including the primary and secondary mirrors, secondary support structure and planar sunshield will be constructed from a single, mass-produced spacecraft module. Each module will be composed of a hexagonal ~1 m diameter spacecraft topped with an edge-to-edge, active mirror assembly.”
These modules would be launched independently and then navigate to the Sun-Earth L2 point using deployable solar sails. These sails will then become the planar telescope sunshield once the modules come together and assemble themselves, without the need for human or robotic assistance. While this may sound radically advanced, it is certainly in keeping with what the NIAC looks for.
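To get a sense of the scale involved, here is a rough back-of-the-envelope estimate (not a figure from the proposal) of how many ~1 m hexagonal modules it would take to tile a ~30 m primary mirror. It assumes "1 m diameter" means 1 m flat-to-flat and ignores gaps and any central obscuration:

```python
import math

# Illustrative estimate only: number of ~1 m hexagonal modules
# needed to cover a ~30 m diameter aperture.
aperture_diameter_m = 30.0
hex_flat_to_flat_m = 1.0

# Circular aperture area (~707 m^2) and area of one regular hexagon
# with the given flat-to-flat width (area = sqrt(3)/2 * d^2).
aperture_area = math.pi * (aperture_diameter_m / 2) ** 2
hex_area = (math.sqrt(3) / 2) * hex_flat_to_flat_m ** 2

modules = math.ceil(aperture_area / hex_area)
print(f"Aperture area: {aperture_area:.0f} m^2")
print(f"Modules needed: ~{modules}")
```

Under these assumptions, the swarm would number in the hundreds of modules, which is why mass-producing a single spacecraft design is central to the concept.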
“That’s what the NIAC program is,” said Dr. Savransky in a recent interview with the Cornell Chronicle. “You pitch these somewhat crazy-sounding ideas, but then try to back them up with a few initial calculations, and then it’s a nine-month project where you’re trying to answer feasibility questions.”
As part of the NIAC’s 2018 Phase I awards, which were announced on March 30th, the team was awarded $125,000 over a nine-month period to conduct these studies. If these are successful, the team will be able to apply for a Phase II award. As Mason Peck, an associate professor of mechanical and aerospace engineering at Cornell and the former chief technology officer at NASA, indicated, Savransky is on the right track with his NIAC proposal:
“As autonomous spacecraft become more common, and as we continue to improve how we build very small spacecraft, it makes a lot of sense to ask Savransky’s question: Is it possible to build a space telescope that can see farther, and better, using only inexpensive small components that self-assemble in orbit?”
The target mission for this concept is the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR), a proposal that is currently being explored as part of NASA’s 2020 Decadal Survey. As one of two concepts being investigated by NASA’s Goddard Space Flight Center, this mission concept calls for a space telescope with a massive segmented primary mirror that measures about 15 meters (49 feet) in diameter.
Much like the JWST, LUVOIR’s mirror would be made up of adjustable segments that would unfold once it was deployed to space. Actuators and motors would actively adjust and align these segments in order to achieve the perfect focus and capture light from faint and distant objects. The primary aim of this mission would be to discover new exoplanets as well as analyze light from those that have already been discovered to assess their atmospheres.
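The payoff of a larger mirror can be seen with a quick diffraction-limit calculation (Rayleigh criterion, theta = 1.22 λ/D). The 550 nm wavelength here is an assumed visible-light value, not a figure from the article:

```python
import math

# Diffraction-limited angular resolution for the mirror sizes discussed:
# Hubble-class (2.4 m), the LUVOIR concept (15 m), and a ~30 m swarm telescope.
wavelength_m = 550e-9                      # assumed visible-light wavelength
rad_to_arcsec = 180 / math.pi * 3600       # radians -> arcseconds

for diameter_m in (2.4, 15.0, 30.0):
    theta_rad = 1.22 * wavelength_m / diameter_m
    print(f"D = {diameter_m:4.1f} m -> {theta_rad * rad_to_arcsec:.4f} arcsec")
```

Doubling the aperture halves the smallest angle the telescope can resolve, which is why a 30 m swarm would sharpen views well beyond what a 15 m monolithic design could manage.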
As Savransky and his colleagues indicated in their proposal, their concept is directly in line with the priorities of the NASA Technology Roadmaps in Science Instruments, Observatories, and Sensor Systems and Robotics and Autonomous Systems. They also state that the architecture is a credible means of constructing a giant space telescope, something that was not possible with the approaches used for previous generations of telescopes like Hubble and the JWST.
“James Webb is going to be the largest astrophysical observatory we’ve ever put in space, and it’s incredibly difficult,” he said. “So going up in scale, to 10 meters or 12 meters or potentially even 30 meters, it seems almost impossible to conceive how you would build those telescopes the same way we’ve been building them.”
Having been granted a Phase I award, the team is planning to conduct detailed simulations of how the modules would fly through space and rendezvous with each other to determine how large the solar sails need to be. They also plan to conduct an analysis of the mirror assembly to validate that the modules could achieve the required surface figure once assembled.
As Peck indicated, if successful, Dr. Savransky’s proposal could be a game changer:
“If Professor Savransky proves the feasibility of creating a large space telescope from tiny pieces, he’ll change how we explore space. We’ll be able to afford to see farther, and better than ever – maybe even to the surface of an extrasolar planet.”
When it comes to the future of space exploration, one of the greatest challenges is coming up with engines that can maximize performance while also ensuring fuel efficiency. This will not only reduce the cost of individual missions, it will ensure that robotic spacecraft (and even crewed spacecraft) can operate for extended periods of time in space without having to refuel.
In recent years, this challenge has led to some truly innovative concepts, one of which was recently built and tested for the very first time by an ESA team. This engine concept consists of an electric thruster that is capable of “scooping” scarce air molecules from the tops of atmospheres and using them as propellant. This development will open the way for all kinds of satellites that can operate in very low orbits around planets for years at a time.
The concept of an air-breathing thruster (aka. Ram-Electric Propulsion) is relatively simple. In short, the engine works on the same principles as a ramscoop (where interstellar hydrogen is collected to provide fuel) and an ion engine – where collected particles are charged and ejected. Such an engine would do away with onboard propellant by taking in atmospheric molecules as it passed through the top of a planet’s atmosphere.
The study’s authors also indicated how satellites using high specific impulse electric propulsion would be capable of compensating for drag during low-altitude operation for an extended period of time. But as they conclude, such a mission would also be limited by the amount of fuel it could carry. This was certainly the case for the ESA’s Gravity field and steady-state Ocean Circulation Explorer (GOCE) gravity-mapping satellite.
While GOCE remained in orbit around Earth for more than four years and operated at altitudes as low as 250 km (155 mi), its mission ended the moment it exhausted its 40 kg (88 lbs) supply of xenon propellant. As such, the concept of an electric propulsion system that can utilize atmospheric molecules as propellant has also been investigated. As Dr. Louis Walpot of the ESA explained in an ESA press release:
“This project began with a novel design to scoop up air molecules as propellant from the top of Earth’s atmosphere at around 200 km altitude with a typical speed of 7.8 km/s.”
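The quoted ~7.8 km/s is just the circular orbital speed at that altitude, which can be checked directly from v = sqrt(μ/r) using Earth's gravitational parameter:

```python
import math

# Circular orbital speed at 200 km altitude: v = sqrt(mu / r)
mu_earth = 3.986e14        # m^3/s^2, Earth's standard gravitational parameter
earth_radius_m = 6.371e6   # mean Earth radius
altitude_m = 200e3

v = math.sqrt(mu_earth / (earth_radius_m + altitude_m))
print(f"Circular orbital speed at 200 km: {v / 1000:.2f} km/s")
```

This matches the figure in the quote, and it is the speed at which incoming molecules would slam into the intake, so no separate accelerator is needed to energize the collected propellant.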
To develop this concept, the Italian aerospace company Sitael and the Polish aerospace company QuinteScience teamed up to create a novel intake and thruster design. Whereas QuinteScience built an intake that would collect and compress incoming atmospheric particles, Sitael developed a dual-stage thruster that would charge and accelerate these particles to generate thrust.
The team then ran computer simulations to see how particles would behave across a range of intake options. But in the end, they chose to conduct a practice test to see if the combined intake and thruster would work together or not. To do this, the team tested it in a vacuum chamber at one of Sitael’s test facilities. The chamber simulated an environment at 200 km altitude while a “particle flow generator” provided the oncoming high-speed molecules.
To provide a more complete test and make sure the thruster would function in a low-pressure environment, the team began by igniting it with xenon-propellant. As Dr. Walpot explained:
“Instead of simply measuring the resulting density at the collector to check the intake design, we decided to attach an electric thruster. In this way, we proved that we could indeed collect and compress the air molecules to a level where thruster ignition could take place, and measure the actual thrust. At first we checked our thruster could be ignited repeatedly with xenon gathered from the particle beam generator.”
As a next step, the team partially replaced xenon with a nitrogen-oxygen air mixture to simulate Earth’s upper atmosphere. As hoped, the engine kept firing, and the only thing that changed was the color of the thrust.
“When the xenon-based blue color of the engine plume changed to purple, we knew we’d succeeded,” said Dr. Walpot. “The system was finally ignited repeatedly solely with atmospheric propellant to prove the concept’s feasibility. This result means air-breathing electric propulsion is no longer simply a theory but a tangible, working concept, ready to be developed, to serve one day as the basis of a new class of missions.”
The development of air-breathing electric thrusters could allow for an entirely new class of satellite that could operate within the fringes of the atmospheres of Mars, Titan, and other bodies for years at a time. With this kind of operational lifespan, these satellites could gather volumes of data on these bodies’ meteorological conditions, seasonal changes, and the history of their climates.
Such satellites would also be very useful when it comes to observing Earth. Since they would be able to operate at lower altitudes than previous missions, and would not be limited by the amount of propellant they could carry, satellites equipped with air-breathing thrusters could operate for extended periods of time. As a result, they could offer more in-depth analyses of climate change, and monitor meteorological patterns, geological changes, and natural disasters more closely.