Does the life of an astronomer or planetary scientist seem exciting?
Sitting in an observatory, sipping warm cocoa, with high-tech tools at your disposal as you work diligently, surfing along on the wavefront of human knowledge, surrounded by fine, bright people. Then one day—Eureka!—all your hard work and the work of your colleagues pay off, and you deliver to humanity a critical piece of knowledge. A chunk of knowledge that settles a scientific debate, or that ties a nice bow on a burgeoning theory, bringing it all together. Conferences…tenure…Nobel Prize?
Well, maybe in your first year of university you might imagine something like that. But science is work. And as we all know, not every minute of one’s working life is super-exciting and gratifying.
As exciting and thrilling as it is to watch all the historic footage from the Apollo Moon landings, you have to admit, the quality is sometimes not all that great. Even though NASA has worked on restoring and enhancing some of the most popular Apollo footage, some of it is still grainy or blurry, a limitation of the video technology available in the 1960s.
But now, new developments in artificial intelligence have come to the rescue, giving viewers a nearly brand-new experience of historic Apollo video.
A photo and film restoration specialist, who goes by the name of DutchSteamMachine, has worked some AI magic to enhance original Apollo film, creating strikingly clear and vivid video clips and images.
We all know how exploration by rover works. The rover is directed to a location and told to take a sample. Then it subjects that sample to analysis and sends home the results. It’s been remarkably effective.
But it’s expensive and time-consuming to send all this data home. Will this way of doing things still work? Or can it be automated?
Picture two tissue-box-sized spacecraft orbiting Earth.
Then picture them communicating, and using a water-powered thruster to approach each other. If you can do that, then you’re up to speed on one of the activities of NASA’s Small Spacecraft Technology Program (SSTP). It’s all part of NASA’s effort to develop small spacecraft to serve its space exploration, science, space operations, and aeronautics endeavors.
How in the world could you possibly look inside a star? You could break out the scalpels and other tools of the surgical trade, but good luck getting within a few million kilometers of the surface before your skin melts off. The stars of our universe hide their secrets very well, but astronomers can outmatch their cleverness and have found ways to peer into their hearts using, of all things, sound waves.
If something called “Project METERON” sounds to you like a sinister project involving astronauts, robots, the International Space Station, and artificial intelligence, I don’t blame you. Because that’s what it is (except for the sinister part). In fact, the METERON Project (Multi-Purpose End-to-End Robotic Operation Network) is not sinister at all, but a friendly collaboration between the European Space Agency (ESA) and the German Aerospace Center (DLR).
The idea behind the project is to place an artificially intelligent robot here on Earth under the direct control of an astronaut 400 km above the Earth, and to get the two to work together.
“Artificial intelligence allows the robot to perform many tasks independently, making us less susceptible to communication delays that would make continuous control more difficult at such a great distance.” – Neil Lii, DLR Project Manager.
On March 2nd, engineers at the DLR Institute of Robotics and Mechatronics set up the robot called Justin in a simulated Martian environment. Justin was given a simulated task to carry out, with as few instructions as possible. The maintenance of solar panels was the chosen task, since they’re common on landers and rovers, and since Mars can get kind of dusty.
The first test of the METERON Project was done in August. But this latest test was more demanding for both the robot and the astronaut issuing the commands. The pair had worked together before, but since then, Justin was programmed with more abstract commands that the operator could choose from.
American astronaut Scott Tingle issued commands to Justin from a tablet aboard the ISS, and the same tablet also displayed what Justin was seeing. The human-robot team had practiced together before, but this test was designed to push the pair into more challenging tasks. Tingle had no advance knowledge of the tasks in the test, nor of Justin’s new capabilities. On board the ISS, Tingle quickly realized that the panels in the simulation down on Earth were dusty. They were also not pointed in the optimal direction.
This was a new situation for Tingle and for Justin, and Tingle had to choose from a range of commands on the tablet. The team on the ground monitored his choices. The level of complexity meant that Justin couldn’t just perform the task and report it completed; Tingle and the robot also had to estimate how clean the panels were after cleaning.
“Our team closely observed how the astronaut accomplished these tasks, without being aware of these problems in advance and without any knowledge of the robot’s new capabilities,” says DLR engineer Daniel Leidner.
The next test will take place in Summer 2018 and will push the system even further. Justin will have an even more complex task before him, in this case selecting a component on behalf of the astronaut and installing it on the solar panels. The German ESA astronaut Alexander Gerst will be the operator.
If the whole point of this is not immediately clear to you, think Mars exploration. We have rovers and landers working on the surface of Mars to study the planet in increasing detail. And one day, humans will visit the planet. But right now, we’re restricted to surface craft being controlled from Earth.
What METERON and other endeavours like it are doing is developing robots that can do our work for us. But they’ll be smart robots that don’t need to be told every little thing. They are just given a task and they go about doing it. And the humans issuing the commands could be in orbit around Mars, rather than being exposed to all the risks on the surface.
“Artificial intelligence allows the robot to perform many tasks independently, making us less susceptible to communication delays that would make continuous control more difficult at such a great distance,” explained Neil Lii, DLR Project Manager. “And we also reduce the workload of the astronaut, who can transfer tasks to the robot.” To do this, however, astronauts and robots must cooperate seamlessly and also complement one another.
That’s why these tests are important. Getting the astronaut and the robot to perform well together is critical.
“This is a significant step closer to a manned planetary mission with robotic support,” says Alin Albu-Schäffer, head of the DLR Institute of Robotics and Mechatronics. It’s expensive and risky to maintain a human presence on the surface of Mars. Why risk human life to perform tasks like cleaning solar panels?
“The astronaut would therefore not be exposed to the risk of landing, and we could use more robotic assistants to build and maintain infrastructure, for example, with limited human resources.” In this scenario, the robot would no longer simply be the extended arm of the astronaut: “It would be more like a partner on the ground.”
Gravitational lenses are an important tool for astronomers seeking to study the most distant objects in the Universe. This technique involves using a massive object (usually a galaxy or galaxy cluster) between a distant light source and an observer to magnify the light coming from that source. In an effect predicted by Einstein’s Theory of General Relativity, this allows astronomers to see objects that might otherwise be too faint to detect.
Recently, a group of European astronomers developed a method for finding gravitational lenses in enormous piles of data. Using the same artificial intelligence algorithms that Google, Facebook and Tesla have used for their purposes, they were able to find 56 new gravitational lensing candidates from a massive astronomical survey. This method could eliminate the need for astronomers to conduct visual inspections of astronomical images.
While useful to astronomers, gravitational lenses are a pain to find. Ordinarily, this consists of astronomers sorting through thousands of images snapped by telescopes and observatories. While academic institutions are able to rely on amateur astronomers and citizen scientists like never before, there is simply no way to keep up with the millions of images that are being regularly captured by instruments around the world.
To address this, Dr. Petrillo and his colleagues turned to what are known as “Convolutional Neural Networks” (CNNs), a type of machine-learning algorithm that mines data for specific patterns. Google used these same neural networks to win a match of Go against the world champion, Facebook uses them to recognize things in images posted on its site, and Tesla has been using them to develop self-driving cars.
“This is the first time a convolutional neural network has been used to find peculiar objects in an astronomical survey. I think it will become the norm since future astronomical surveys will produce an enormous quantity of data which will be necessary to inspect. We don’t have enough astronomers to cope with this.”
The team then applied these neural networks to data derived from the Kilo-Degree Survey (KiDS). This project relies on the VLT Survey Telescope (VST) at the ESO’s Paranal Observatory in Chile to map 1500 square degrees of the southern night sky. This data set consisted of 21,789 color images collected by the VST’s OmegaCAM, a multiband instrument developed by a consortium of European scientists in conjunction with the ESO.
These images all contained examples of Luminous Red Galaxies (LRGs), three of which were known to be gravitational lenses. Initially, the neural network found 761 gravitational lens candidates within this sample. After inspecting these candidates visually, the team was able to narrow the list down to 56 lenses. These still need to be confirmed by space telescopes in the future, but the results were quite positive.
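The study’s actual network isn’t reproduced here, but the core idea behind a convolutional lens-finder can be illustrated with a toy sketch: a convolution filter that responds to ring-shaped (Einstein-ring-like) features, applied to small images and rectified into a candidate score. Everything below (the kernel shape, image sizes, and scoring) is illustrative, not the method from the KiDS study.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def ring_kernel(size=9, radius=3.0, width=1.0):
    """A hand-made filter that responds to ring-shaped features.

    A trained CNN learns filters like this from labeled examples; here we
    simply write one down to show what the convolution is matching against.
    """
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(yy - c, xx - c)
    k = np.exp(-((r - radius) ** 2) / (2 * width ** 2))
    return k - k.mean()  # zero-mean, so flat backgrounds score near zero

def lens_score(image):
    """Score an image by its strongest rectified (ReLU-style) ring response."""
    response = convolve2d(image, ring_kernel())
    return float(np.maximum(response, 0).max())

# Toy data: one image containing a faint ring, one containing pure noise.
rng = np.random.default_rng(0)
size = 31
c = (size - 1) / 2.0
yy, xx = np.mgrid[0:size, 0:size]
r = np.hypot(yy - c, xx - c)
ring_image = np.exp(-((r - 3.0) ** 2) / 2.0) + 0.05 * rng.standard_normal((size, size))
noise_image = 0.05 * rng.standard_normal((size, size))

print("ring score: ", lens_score(ring_image))
print("noise score:", lens_score(noise_image))
```

The ring image scores far higher than the noise image, and thresholding such scores over thousands of cutouts is, in miniature, how a network flags candidates for the visual inspection step the team performed by hand.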
As they indicate in their study, such a neural network, when applied to larger data sets, could reveal hundreds or even thousands of new lenses:
“A conservative estimate based on our results shows that with our proposed method it should be possible to find ∼100 massive LRG-galaxy lenses at z ≳ 0.4 in KiDS when completed. In the most optimistic scenario this number can grow considerably (to maximally ∼2400 lenses), when widening the colour-magnitude selection and training the CNN to recognize smaller image-separation lens systems.”
In addition, the neural network rediscovered two of the known lenses in the data set, but missed the third one. However, this was because that lens was particularly small and the neural network was not trained to detect lenses of this size. In the future, the researchers hope to correct for this by training their neural network to notice smaller lenses and reject false positives.
But of course, the ultimate goal here is to remove the need for visual inspection entirely. In so doing, astronomers would be freed up from having to do grunt work, and could dedicate more time towards the process of discovery. In much the same way, machine learning algorithms could be used to search through astronomical data for signals of gravitational waves and exoplanets.
Much like how other industries are seeking to make sense out of terabytes of consumer or other types of “big data”, the fields of astrophysics and cosmology could come to rely on artificial intelligence to find the patterns in a Universe of raw data. And the payoff is likely to be nothing less than an accelerated process of discovery.
If you’re thinking of having yourself cryogenically suspended and awakened in some future paradise, you might want to set your alarm clock for no later than 1,000 years from now. According to the BBC, Stephen Hawking says as much in the 2016 Reith Lectures – a series of lectures organized by the BBC that explore the big challenges faced by humanity.
In his first lecture, which will be broadcast on February 26th on the BBC, Hawking covers the topic of black holes, whether or not they have hair, and other concepts about these baffling objects.
But at the end of the lecture, he responded to audience questions about humanity’s capacity for self-destruction. Hawking said that 1,000 years might be all we have until we meet our demise at the hands of our own scientific and technological advances.
As we have become increasingly advanced both scientifically and technologically, Hawking says, we will be creating “new ways that things can go wrong.” Hawking mentioned nuclear war, global warming, and genetically engineered viruses as things that could cause our extinction.
Throughout the Cold War, annihilation at the hands of our own nuclear weapons was a real danger. A nuclear launch in response to a real or perceived threat, and the retaliation and counter-retaliation that would follow, was a risk faced by everyone on the planet. And the two superpowers had enough warheads between them to potentially wipe out life on Earth.
The USA and the USSR have reduced their stockpiles of nuclear weapons in recent decades, but there are still enough warheads around to wipe us out. The possibility of a rogue state like North Korea setting off a nuclear confrontation is still very real. By the time Hawking’s 1,000 year time-frame has passed, we’ll either have solved this problem, or we won’t be here.
Earth is getting warmer, and though the Earth has warmed and cooled many times in its history, this time we only have ourselves to blame. We’ve been inadvertently enriching our atmosphere with carbon since the Industrial Revolution. All that carbon is creating a nice insulating layer around Earth, as it traps heat that would normally radiate into space. If we reach some of the “tipping points” that scientists talk about, like the melting of permafrost and the subsequent release of methane, we could be in real trouble.
Different climate engineering schemes have been thought up to counteract global warming, like seeding the upper atmosphere with reflective molecules, and having fleets of ships around the equator spraying sea mist into the air to partially block out the sun. Or even extracting carbon from the atmosphere. But how realistic or effective those counter-measures might be is not clear.
Genetically Engineered Viruses
As a weapon, a virus can be cheap and effective. There’ve been programs in the past to develop biological weapons. The temptation to use genetic science to create extremely deadly viruses may prove too great.
Smallpox and Viral Hemorrhagic Fevers have been weaponized, and as our genetic manipulation abilities grow, it’s possible, or even likely, that somebody somewhere will attempt to develop even more dangerous viral weapons. They may be doing it right now.
Hawking never mentioned AI in his talk, but it fits in with the discussion. As our machines get smarter and smarter, will they deduce that the only chance for survival is to remove or reduce the human population? Who knows. But Hawking himself, as well as other thinkers, have been warning us that there may be a catastrophic downside to our achievements in AI.
We may love the idea of driverless cars, and computer assistants like Siri. But as numerous science fiction stories have warned us (Skynet in the Terminator series being my favorite), it may be a small step from very helpful AI that protects us and makes our lives easier, to AI that decides existence would be a whole lot better without us pesky humans around.
The Technological Singularity is the point at which artificially intelligent systems “wake up” and become—more or less—conscious. These AI machines would start to improve themselves recursively, or build better and smarter machines. At this point, they would be a serious danger to humanity.
Drones are super popular right now. They flew off the shelves at Christmas, and they’re great toys. But once we start seeing drones with primitive but effective AI, patrolling the property of the wealthy, it’ll be time to start getting nervous.
Extinction May Have To Wait
As our scientific and technological prowess grows, we’ll definitely face new threats, just like Hawking says. But, that same progress may also protect us, or make us more resilient. Hawking says, “We are not going to stop making progress, or reverse it, so we have to recognise the dangers and control them. I’m an optimist, and I believe we can.” So do we.
Maybe you’ll be able to hit the snooze button after all.