Space biology experiments have just arrived in the classroom. Designed for hundreds of K-12 students, a University of Colorado, Boulder payload will launch on board Space Shuttle Endeavour on November 14th carrying spiders and butterfly larvae. The purpose? To provide an educational research tool for youngsters, helping to develop their interest in biology and space science. The butterfly larvae will be studied over their complete life cycle in space, from larva to pupa to butterfly to egg. Web-building spiders will be studied to see how their behaviour alters in the absence of gravity. Both sets of experiments will then be compared with control subjects on the ground… I wish I had the chance to do this kind of research when in school. I wish I had the chance to do this kind of research now!
“This program is an excellent example of using a national asset like the International Space Station to inspire K-12 students in science, technology, engineering and math,” said BioServe Director Louis Stodieck, principal investigator on the project. BioServe has flown two previous K-12 payloads as part of their CSI program on other shuttle flights to the International Space Station (ISS).
This particular experiment will study the activities and feeding habits of web-building spiders in space, compared to spiders in the classroom. Hundreds of students from several locations in the US are involved in the project and will learn valuable research techniques along with boosting their interest in the sciences. After all, it isn’t every day you get a chance to carry out cutting-edge research on the world’s most extreme science laboratory!
The second set of experiments will be another space/Earth comparison, but this time a study of the full lifespan of painted lady butterflies. Four-day-old pupae will be launched into space and watched via downlink video, still images and data from the ISS. Partners in the project include the Denver Museum of Nature and Science, the Butterfly Pavilion in Westminster, CO and the Baylor College of Medicine’s Center for Education Outreach.
BioServe is a non-profit, NASA-funded organization hoping to include payloads on each of the remaining shuttle flights until retirement. “Between now and then, we are seeking sponsors for our educational payloads to enhance the learning opportunities for the K-12 community in Colorado and around the world,” added BioServe Payload Mission Manager Stefanie Countryman.
This is where the strength of the International Space Station really comes into play. Real science being carried out by schools in the US to boost interest not only in space travel, but biology too. It’s a relief, I was getting a little tired hearing about busted toilets, interesting yet pointless boomerang “experiments”, more tests on sprouting seeds and the general discontent about the ISS being an anticlimax.
Let’s hope BioServe’s projects turn out well and all the students involved are inspired by the opportunities of space travel. Although I can’t help but feel sorry for the confused spiders and butterfly larvae when they realise there’s no “up” any more (I hope they don’t get space sick).
We already know that the Large Hadron Collider (LHC) will be the biggest, most expensive physics experiment ever carried out by mankind. Colliding relativistic particles at energies previously unimaginable (up to the 14 TeV mark by the end of the decade) will generate millions of particles, known and as yet to be discovered, that need to be tracked and characterized by huge particle detectors. This historic experiment will require a massive data collection and storage effort, re-writing the rules of data handling. Every five seconds, LHC collisions will generate the equivalent of a DVD’s worth of data; that’s a data production rate of one gigabyte per second. To put this into perspective, an average household computer with a very good connection may be able to download data at a rate of one or two megabytes per second (if you are very lucky! I get 500 kilobytes/second). So, LHC engineers have designed a new kind of data handling method that can store and distribute petabytes (millions of gigabytes) of data to LHC collaborators worldwide (without getting old and grey whilst waiting for a download).
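A quick back-of-envelope calculation (just a sketch, using the DVD capacity and home download rate quoted above) shows why ordinary connections can’t cope:

```python
# Back-of-envelope check on the data rates quoted above.
DVD_BYTES = 4.7e9                 # single-layer DVD capacity, ~4.7 GB
dvd_interval_s = 5                # one DVD-worth every five seconds

lhc_rate = DVD_BYTES / dvd_interval_s            # bytes per second
print(f"LHC output: {lhc_rate / 1e9:.2f} GB/s")  # ~0.94 GB/s, i.e. about 1 GB/s

# How long would a petabyte take over a good home connection?
home_rate = 2e6                   # an optimistic 2 MB/s download
petabyte = 1e15
years = petabyte / home_rate / (3600 * 24 * 365)
print(f"1 PB at 2 MB/s: {years:.1f} years")      # ~15.9 years
```

Nearly sixteen years to pull down a single petabyte: clearly a different approach is needed.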
In 1990, the European Organization for Nuclear Research (CERN) revolutionized the way in which we live. The previous year, Tim Berners-Lee, a CERN physicist, wrote a proposal for electronic information management. He put forward the idea that information could be transferred easily over the Internet using something called “hypertext.” As time went on Berners-Lee and collaborator Robert Cailliau, a systems engineer also at CERN, pieced together a single information network to help CERN scientists collaborate and share information from their personal computers without having to save it on cumbersome storage devices. Hypertext enabled users to browse and share text via web pages using hyperlinks. Berners-Lee then went on to create a browser-editor and soon realised this new form of communication could be shared by vast numbers of people. By May 1990, the CERN scientists called this new collaborative network the World Wide Web. In fact, CERN was responsible for the world’s first website: http://info.cern.ch/ and an early example of what this site looked like can be found via the World Wide Web Consortium website.
So CERN is no stranger to managing data over the Internet, but the brand new LHC will require special treatment. As highlighted by David Bader, executive director of high performance computing at the Georgia Institute of Technology, the current bandwidth allowed by the Internet is a huge bottleneck, making other forms of data sharing more desirable. “If I look at the LHC and what it’s doing for the future, the one thing that the Web hasn’t been able to do is manage a phenomenal wealth of data,” he said, meaning that it is easier to save large datasets on terabyte hard drives and then send them in the post to collaborators. Although CERN had addressed the collaborative nature of data sharing on the World Wide Web, the data the LHC will generate will easily overload the small bandwidths currently available.
This is why the LHC Computing Grid was designed. The grid handles vast LHC dataset production in tiers; the first (Tier 0) is located on-site at CERN near Geneva, Switzerland. Tier 0 consists of a huge parallel computer network containing 100,000 advanced CPUs that have been set up to immediately store and manage the raw data (1s and 0s of binary code) pumped out by the LHC. It is worth noting at this point that not all the particle collisions will be detected by the sensors; only a very small fraction can be captured. Although only a comparatively small number of particles may be detected, this still translates into a huge output.
Tier 0 manages portions of the data output by blasting it through dedicated 10 gigabit-per-second fibre optic lines to 11 Tier 1 sites across North America, Asia and Europe. This allows collaborators such as the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory in New York to analyse data from the ALICE experiment, comparing results from the LHC lead ion collisions with their own heavy ion collision results.
From the Tier 1 international computers, datasets are packaged and sent to 140 Tier 2 computer networks located at universities, laboratories and private companies around the world. It is at this point that scientists will have access to the datasets to perform the conversion from the raw binary code into usable information about particle energies and trajectories.
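These figures give a feel for why 11 separate Tier 1 links are needed. A rough sketch (assuming the ~1 GB/s raw rate quoted earlier; the real grid filters and compresses the data heavily):

```python
# Rough check: how hard would a single dedicated fibre line have to work
# to keep pace with the LHC's raw output?
raw_rate = 1e9                       # ~1 GB/s out of Tier 0 (figure quoted above)
daily_bytes = raw_rate * 86400       # ~86.4 TB of raw data per day

link_rate = 10e9 / 8                 # one 10 gigabit/s line = 1.25 GB/s
hours_per_link = daily_bytes / link_rate / 3600
print(f"Daily raw output: {daily_bytes / 1e12:.1f} TB")
print(f"A single 10 Gb/s link needs {hours_per_link:.1f} h to move one day's data")
```

One link alone would be saturated nearly around the clock, which is why the load is fanned out across many sites.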
The tier system is all well and good, but it wouldn’t work without a highly efficient type of software called “middleware.” When trying to access data, the user may want information that is spread throughout the petabytes of data on different servers in different formats. An open-source middleware platform called Globus will have the huge responsibility of gathering the required information seamlessly, as if that information were already sitting inside the researcher’s computer.
It is this combination of the tier system, fast connection and ingenious software that could be expanded beyond the LHC project. In a world where everything is becoming “on demand,” this kind of technology could make the Internet transparent to the end user. There would be instant access to everything from data produced by experiments on the other side of the planet, to viewing high definition movies without waiting for the download progress bar. Much like Berners-Lee’s invention of HTML, the LHC Computing Grid may revolutionize how we use the Internet.
On Wednesday August 27th, at 9 p.m. ET/PT in the US, the famed “Mythbusters” on the Discovery Channel will take on one of the biggest myths ever: the belief that the Apollo Moon landings were faked. Some folks who lived through the 1960s never believed the Moon landings actually happened, and somehow this belief persisted. In 2001 the Fox Channel aired the show “Conspiracy Theory: Did We Land on the Moon?” and the belief grew. But now the Mythbusters take on the HBs (hoax believers) who say they have scientific evidence the Moon landings were faked. Adam and Jamie will fight bad science with their usual good science. The results? We’ll have to wait until tonight to see. But here’s a preview:
Puzzled about particle physics? Want to know what the inside of the Large Hadron Collider looks like? Like music, fun and science? Want to know for sure the LHC won’t create a black hole that will swallow the Earth? Find all of the above in a rap song created by Kate McAlpine, 23, who used to work in the press office of CERN, where on September 10, the LHC will be powered up. The song has been a hit on YouTube, and has been downloaded over 400,000 times. Physicists say the science in the song is “spot on” and provides a rhythmic tour of the mysteries of modern physics and the workings of the LHC, while noting that “the things that it discovers will rock you in the head.” Without further ado, here it is:
McAlpine wrote the rap during her 40-minute morning commute to CERN. “Some more academic people are not too happy and they think it kind of cheapens the science and dumbs it down,” she says. “But I think mostly people are excited to have this rap out there. And a lot of people at CERN just think it’s great, so that’s exciting.”
You seem to like a nice series, so here’s a new one Fraser and Pamela have been thinking about. Over the course of the next four weeks, they’re going to cover each of the basic forces in the Universe. And this week, they’re going to start with gravity, the force you’re most familiar with. Gravity happens when masses attract one another, and we can calculate its effect with exquisite precision. But you might be surprised to know that scientists have no idea why gravity happens.
A debate today between astronomer Neil deGrasse Tyson and planetary scientist Mark Sykes, moderated by NPR’s Ira Flatow, addressed the issue of Pluto’s planetary status. There was lots of arm-waving and finger-pointing, endless interruptions, disagreements on details big and small, and battling one-liners. The two scientists sat at a table with the moderator between them and Flatow was often obscured by Tyson and Sykes getting in each other’s faces in eye-to-eye confrontation. At one point, Flatow was hit by Tyson’s ebullient arm motions. Yes, it was heated. But it was fun, too. It ended up being not so much a debate between the Pluto-huggers and the Pluto-haters as a disagreement over the lexicon of astronomy and planetary science and, primarily, the definition of a planet. Pluto’s planetary status was definitely not decided here, and the debate concluded with an amicable agree-to-disagree concurrence that the scientific process is an ongoing, evolving practice. But it wasn’t without fireworks.
At the start of the Great Planet Debate, Flatow laid down the ground rules, which included no throwing of perishable items, but that was about the only rule that didn’t get disregarded. Tyson, director of the Hayden Planetarium in New York and host of Nova ScienceNow, and who is in the camp that Pluto is not a planet, began his opening statements with “It’s simple. The word ‘planet’ has lost all scientific value.” He went on, saying “planet” doesn’t tell you much and you have to ask all sorts of questions such as is it big or small, rocky or gaseous, in the habitable zone or not, etc. “If you have to ask twenty questions after I say I’ve discovered a planet, the word has lost its utility.” Tyson said “planet” had utility far back in time when there wasn’t much else we knew about, but we know so much more now. “If we’re going to rely on one word and put them all in one pot, what are we doing as scientists and educators? The time has come to discard the useless words and invent a whole new system to respect the level of science we have achieved… We’re in desperate need of a new lexicon to accommodate this knowledge,” he said.
Sykes, director of the Planetary Science Institute, and who believes Pluto should be reinstated as a planet, began, “How we categorize things is part of the science process. It is natural for humans to group things together with common characteristics as a tool to better understand them and how they work. This applies to biology and astronomy as well.” He continued that we have discovered planets around other stars and continue to find Kuiper Belt objects that will need to be classified, so classifying objects is not a useless task. The IAU (International Astronomical Union) bit the bullet and decided on a classification, but unfortunately, Sykes said, what they came up with was not very useful.
That was the end of decorum, as Tyson interrupted with, “You wanted a definition. They gave you a definition and now you’re complaining about it!”
“Absolutely,” said Sykes, wanting to continue, but Tyson quickly chimed in, “And let me add…”, where Sykes butted in with “You have to let me start before you add!”
Flatow looked around and said, “I think I’m in a danger zone here.”
Thus began the debate.
Sykes said that any definition has to have a reason, or a purpose. According to the IAU’s definition, planets have to orbit the sun, they have to be round, and they have to have cleared their orbits, among other things. There was immediate confusion with this definition, which Sykes said was a little “goofy.” In order to be a planet, an object has to be bigger the farther away it is from the sun, and the definition ignores physical characteristics. He believes it’s useful to group things together that are similar and then have subcategories. So, you have planets, under which are terrestrial, gas giants, ice planets, etc.
Tyson said that even for him, the IAU’s definition falls short of taking the total amount of information to task. “If you only want to call round things planets, that puts Pluto in the same class as Jupiter. I happen to like round things. But what other lexicon might be available to group similar things together?”
“That’s why god made subcategories,” said Sykes. “It’s good to have a good general starting point for classifying things.”
Tyson humorously pointed out this debate is big only in the US, which he attributed to Disney’s creation of the lovable, dimwitted cartoon bloodhound named Pluto. School kids, grownups, op-ed writers all say Pluto is their favorite planet. “I am certain that the word ‘plutocracy’ is traceable to what Disney has done, so it’s hard to extricate the sentiment we have for the planet from the dog.”
Sykes said the IAU didn’t expand our perspective on planets, but narrowed it. “The planet count went down, and what was the justification of that? The proponents have never given a good explanation of what was motivating that perspective.”
Tyson said numbers aren’t important, but words and definitions are, and we definitely need new ones.
Both scientists gave good arguments for their cause, and since I’m decidedly on the fence with this issue, I found myself leaning towards one option or the other, as each one spoke. Sykes, who wants to see Pluto reinstated as a planet, wants to take what we have and make it better, while Tyson, who thinks Pluto is a comet, wants to start over with new and better words and definitions.
It was an entertaining and educational debate between two well-spoken and intelligent scientists who, admittedly, weren’t always polite to each other. (Sykes said, “When we’re not fighting we get along fine.”) The most important thing, they both agreed, was that scientists are actually talking about this issue in the public eye and people are interested. But more importantly, the public is seeing the scientific process in action. They said this debate shouldn’t be about making things easy, or worrying about “not confusing the public.” Learning science shouldn’t be rote memorization of lists of objects, but a discussion of how objects are similar and different. “My recommendation to school teachers,” said Tyson, “is to get the notion of counting things out of your system and comb the solar system for the richness of objects. Ask about different ways to combine the different objects in our solar system and have a discussion about their different properties.”
The debate will be available online, and we’ll post a link to it here when it is.
Sykes ended with his closing argument: “We both have issues with what happened with the IAU; it’s part of an ongoing presentation. But the important thing is that the public gets to see the debate, and it’s not a battle over what list and what numbers you have, but a debate of the issues. That’s more important than whether either of us has convinced you of one perspective. Science in this country is too much about memorizing lists promulgated by those in authority. This is helping to expose the messy side of science. This debate is good and positive.”
Tyson ended by saying how charmed he is at the level of public interest in this subject. “How many sciences get to have their issues debated in the op-ed pages and comics?” He said he was happy with the word “planet” until all the data started pouring in from our explorations. “There should be a way to celebrate a new way to think about things. There ought to be a way to capture that,” he said.
Obviously, this is not the last word on the subject from either scientist, or either side of the debate.
Former NASA astronaut and Rocketplane test pilot John Herrington has a new state-of-the-art vehicle of choice: a bike. But it’s a touring bike fully loaded with a GPS, laptop, broadband phone, and digital and video cameras. Herrington is embarking on a cross-country bike trek to promote and encourage student participation in science, technology, engineering and mathematics (STEM). Herrington, once a college dropout who went on to fly in space in 2002 on the STS-113 mission, hopes he can make a difference and have an impact on children by sharing his experiences and providing web-based, hands-on activities that use STEM skills to solve problems while following his journey. Herrington also wants to encourage children to pursue their dreams and seek out exciting opportunities. “The generation that grew up in the age of the Apollo program and the journey to the moon was motivated by the excitement of space and the possibilities that it brought to the nation,” said Herrington. “Those kinds of possibilities to explore the unknown and make new discoveries still exist, but we must motivate students to learn and have a way to connect what they learn to what they do on a daily basis.”
Herrington began his coast-to-coast tour yesterday (August 13) from Cape Flattery in Washington state and will finish at Cape Canaveral in Florida. The trip is expected to take three months, and Herrington will stop at schools along the way to talk about his “journey to the space program, the wonders of flying in space and the need for students to realize their potential that lies within,” he said.
Students can log into Herrington’s blog for daily updates and new problem solving challenges. Herrington, the first Native American in space, will be especially focusing on Native students, hoping to kindle imagination and motivation. “I was once an unmotivated student, looking for something that sparked my fire,” he said. “I found it as a rock-climber on a survey crew, learning the application of mathematics from the side of a cliff. That experience inspired me to return to school and ultimately led to my career as an astronaut.”
As part of the crew of the space shuttle Endeavour in November of 2002, Herrington conducted three spacewalks to help in construction of the International Space Station, logging just under 20 hours of EVAs. He left NASA in 2005 to join Rocketplane Global as a test pilot. He left Rocketplane in December 2007 to pursue other opportunities, which obviously, includes biking.
“Sometimes it takes someone outside of our normal circle of friends and family to shine a light in our direction and help us along,” Herrington continued. “As I set out on this bike ride and try to make the learning practical and fun, I hope to also show students that it takes commitment and effort, both mental and physical, to accomplish your goals.”
Here are the topics Herrington will be focusing on in his educational endeavors:
Science:
Caloric intake and heart rate in relation to overall health
Hydration, dehydration, hypothermia
Weather, wind velocity, ground and air speed, relative motion

Technology:
Bike composition and weight/comparison to space shuttle/station
Bike maintenance and repair
Getting power to electronics (i.e. batteries, solar)
Global Positioning System (GPS)
Digital camera technology

Engineering:
Velocity and torque
Mass and weight
Friction and measurements

Math:
Addition and subtraction
Geometry, trigonometry and physics
Standard in almost every Star Trek episode are warp drives and cloaking devices. But in reality these science fiction gadgets defy the laws of physics. Or do they? Different scientists have been working on developing these two devices and they say they are getting closer to actually creating working prototypes. While warp drive won’t be available anytime soon, scientists are gaining a better understanding of how faster-than-light speed could possibly be achieved. And as for cloaking devices, don’t look now, but researchers recently cloaked three-dimensional objects using specially engineered materials that redirect light around objects.
Previously, scientists at the University of California, Berkeley were only able to cloak very thin, two-dimensional objects. But now, using meta-materials, which are mixtures of metal and circuit board materials such as ceramic, Teflon or fiber composite, scientists have deflected light waves around an object, like water flowing around a smooth rock in a stream. Objects are visible because they scatter the light that strikes them, reflecting some of it back to the eye. But the meta-materials would ward off light, radar or other waves. In effect, it would be a type of optical camouflage.
The research group, led by Xiang Zhang say they are a step closer to being able to render people and objects invisible. Their findings will be released later this week in the journals Nature and Science.
Another scientist and one of the leaders in cloaking research is John Pendry, a theoretical physicist at Imperial College, London. It was he who first worked out how a cloak could be built in theory, and he then helped build the first working cloak. Pendry recently submitted an abstract that discusses what he says is a new type of cloak, one that gives all cloaked objects the appearance of a flat conducting sheet. Pendry says this type of cloak has the advantage that nothing remarkable is required to create it. Pendry said the device could be “made isotropic. It makes broadband cloaking in the optical frequencies one step closer.” This type of cloak seemingly creates a mirage to render an object invisible to the eye. Pendry’s own website says information on his new cloak will be available soon.
While cloaking devices would have military applications, a group of scientists researching warp drives say they just want to have the ability to travel to Earth-like exoplanets, like Gliese 581c, to better understand the origin and development of life. “The only way we could realistically visit these worlds in time-frames on the order of a human lifespan would be to develop what has been popularly termed a ‘warp drive,’” said researchers Gerald Cleaver and Richard Obousy from Baylor University in Texas.
Their work expands on research done by theoretical physicist Miguel Alcubierre, who in 1994 demonstrated that space could be made to move around a spacecraft by ‘stretching’ space, so that space itself would expand behind a hypothetical spacecraft while contracting in front of the craft, creating the effect of motion. So the ship itself doesn’t move, but space moves around it.
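For reference, the geometry Alcubierre wrote down in 1994 is usually given (in units where c = 1) as:

```latex
ds^2 = -dt^2 + \left[ dx - v_s\, f(r_s)\, dt \right]^2 + dy^2 + dz^2
```

where v_s is the velocity of the bubble’s centre, r_s is the distance from that centre, and f is a shaping function equal to 1 inside the bubble and falling to 0 far away. Spacetime expands behind the bubble and contracts in front of it, carrying the ship along while it sits in locally flat space.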
Their new research tries to take advantage of advances in understanding dark energy and why our universe is ever-expanding in every direction. Comprehending that might give us a leg up in being able to generate an asymmetric bubble around a spacecraft. “If we can understand why spacetime is already expanding, we may be able to use this knowledge to artificially generate an expansion (and contraction) of spacetime,” said Cleaver and Obousy in their abstract.
They propose manipulating the 11th dimension, a feature of an extension of string theory known as “M-theory,” to create a bubble of dark energy by shrinking the 11th dimension in front of the ship and expanding it behind.
Obviously, this is highly theoretical, but if it leads researchers to a better understanding of dark energy, so much the better.
There’s one hitch, however. Cleaver and Obousy calculated that the energy needed to distort the space around a spacecraft-sized object is about 10^45 Joules, roughly the total energy of an object the size of Jupiter if all its mass were converted into energy.
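That figure is easy to sanity-check with E = mc² (a rough sketch using Jupiter’s mass; it lands within an order of magnitude of the quoted 10^45 J):

```python
# Sanity check on the quoted figure: convert Jupiter's mass entirely to energy.
c = 2.998e8                # speed of light, m/s
m_jupiter = 1.898e27       # Jupiter's mass, kg
energy = m_jupiter * c**2  # E = m c^2
print(f"E = {energy:.2e} J")   # ~1.7e44 J, the same ballpark as the quoted 10^45 J
```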
This creates a chicken and the egg type of conundrum. Which comes first: understanding dark energy or having the ability to create huge amounts of energy?
But Cleaver and Obousy are upbeat about it all. “This is a hypothetical propulsion device that could theoretically circumvent the traditional limitations of special relativity which restricts spacecraft to sub-light velocities. Any breakthrough in this field would revolutionize space exploration and open the doorway to interstellar travel.”
The Aurora Borealis or Northern Lights are stunningly beautiful. But they can also disrupt radio communications and GPS signals, and even cause power outages. What is it about the ethereal Northern Lights that makes them shimmer and dance with colorful light while sometimes wreaking havoc with electrical systems here on Earth? Using a fleet of five satellites, NASA researchers have discovered that explosions of magnetic energy a third of the way to the moon power the substorms that cause sudden brightenings and rapid movements of the aurora borealis. “We discovered what makes the Northern Lights dance,” said Dr. Vassilis Angelopoulos of the University of California, Los Angeles. Angelopoulos is the principal investigator for the Time History of Events and Macroscale Interactions during Substorms mission, or THEMIS.
The cause of the shimmering in Northern Lights is magnetic reconnection, a common process that occurs throughout the universe when stressed magnetic field lines suddenly snap to a new shape, like a rubber band that’s been stretched too far.
“As they capture and store energy from the solar wind, the Earth’s magnetic field lines stretch far out into space. Magnetic reconnection releases the energy stored within these stretched magnetic field lines, flinging charged particles back toward the Earth’s atmosphere,” said David Sibeck, THEMIS project scientist at NASA’s Goddard Space Flight Center. “They create halos of shimmering aurora circling the northern and southern poles.”
The data was gathered by five strategically positioned THEMIS satellites, combined with information from 20 ground-based observatories located throughout Canada and Alaska. Launched in February 2007, the five identical satellites line up once every four days along the equator and take observations synchronized with the ground observatories. Each ground station uses a magnetometer and a camera pointed upward to determine where and when an auroral substorm will begin. Instruments measure the auroral light from particles flowing along Earth’s magnetic field and the electrical currents these particles generate.
During each alignment, the satellites capture data that allow scientists to precisely pinpoint where, when, and how substorms measured on the ground develop in space. On Feb. 26, 2008, during one such THEMIS lineup, the satellites observed an isolated substorm begin in space, while the ground-based observatories recorded the intense auroral brightening and space currents over North America.
These observations confirm for the first time that magnetic reconnection triggers the onset of substorms. The discovery supports the reconnection model of substorms, which asserts that a substorm follows a particular pattern as it begins: a period of reconnection, followed by rapid auroral brightening and rapid expansion of the aurora toward the poles, culminating in a redistribution of the electrical currents flowing in space around Earth.
Solving the mystery of where, when, and how substorms occur will allow scientists to construct more realistic substorm models and better predict a magnetic storm’s intensity and effects.
All the best sci-fi films have them, and they may become our future automated space explorers. Currently, one of the biggest drawbacks to using robots in space is that they depend on human input (i.e. commands need to be sent for every robotic arm motion and every rover wheel rotation). This means that, especially with missions operating far from Earth (such as the Phoenix Mars Lander and the Mars Exploration Rovers), very simple and mundane tasks can take hours or even days to complete. One of the main arguments for manned exploration of space is that very complex science can be carried out very rapidly (after all, astronauts are human, and operations that take robots weeks can be completed by a person in seconds). But what if our robotic explorers had a high degree of automation? What if they could sever the requirement for human input and carry out tasks with intelligent reasoning? As robotic and computer technology increases in sophistication, one Caltech scientist believes space exploration by artificial intelligence is closer than we think…
I remember watching the start of Star Wars: The Empire Strikes Back thinking it was so unfair that Darth Vader and his ilk had access to intelligent space exploration droids that could fly around the galaxy, land on alien worlds and automatically seek out the rebels on Hoth (directing the battle fleet to the icy moon, creating one of the most famous and atmospheric sci-fi battle sequences in movie history. In my opinion at least). But what if we were able to build such “droids” (in fact, droid is a good description of these space explorers, defined as ‘self-aware robots’) that could be sent out into space to explore and report back to mission control without depending on instructions from Earth?
Wolfgang Fink, physicist and researcher at Caltech, believes robotic exploration of space will always take the lead, and even reverse the need for manned missions. “Robotic exploration probably will always be the trail blazer for human exploration of far space,” he says in an interview with Sharon Gaudin. “We haven’t yet landed a human being on Mars but we have a robot there now. In that sense, it’s much easier to send a robotic explorer. When you can take the human out of the loop, that is becoming very exciting.”
While Fink is encouraged by the progress made by missions such as Phoenix and its robotic arm, he is keen to emphasize that the link between human and robot needs to be removed, thus allowing robots to make their own decisions on what science needs to be carried out. In reference to Phoenix’s robotic arm he said, “The arms are the tools, but it’s about the intent to move the arms. That’s what we’re after. To [have the robot] know that something there is interesting and that’s where it needs to go and then to go get a sample from it. That’s what we’re after. You want to get rid of the joystick, in other words. You want the system to take control of itself and then basically use its own tools to explore.”
The key attribute robots need to possess is the ability to recognize something of interest, such as a rock or crater, something that a human mind would see as a scientific opportunity. At Caltech, Fink and others are working on programs that use images to let robots distinguish colours, textures, shapes and obstacles. Once artificial intelligence has the ability to do this, if the programming is complex enough, the robot can notice something that is out of place, or a region worth investigating (such as a strangely coloured patch of Mars regolith that a rover would decide to dig into).
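A toy version of the idea (purely illustrative; real systems use far richer texture and shape features than a single brightness channel) is to scan an image for a patch whose colour deviates strongly from the scene’s average, the kind of cue an autonomous rover might use to pick its next target:

```python
def find_outliers(image, threshold=50):
    """Return (row, col) positions of pixels far from the mean brightness."""
    pixels = [v for row in image for v in row]
    mean = sum(pixels) / len(pixels)
    return [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if abs(v - mean) > threshold]

# Mostly uniform regolith (~100) with one strangely bright patch (220).
scene = [[100, 102, 99],
         [101, 220, 100],
         [98, 100, 103]]
print(find_outliers(scene))   # the bright patch at (1, 1) stands out
```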
As you’d expect, the software is being tested, and Caltech scientists are beginning to try it out on a rover’s navigation functions. The robotic decision-making is still very basic at present, but NASA has taken a keen interest in Fink’s work. For example, in 2017 NASA intends to send a robotic mission to Titan, one of Saturn’s moons. In all likelihood the moon will be explored by a balloon-type vehicle. However, it would be impractical for such a vehicle to depend on commands sent from Earth (it would take more than an hour for a signal to cross that distance), so a certain degree of automation would need to be built into the craft so fast decisions can be made in a dynamic environment such as Titan’s atmosphere.
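That hour-plus delay follows directly from the distance involved (a sketch assuming a typical Earth-Saturn separation of about 9.5 AU; the actual figure varies with the planets’ positions):

```python
# Why a Titan explorer needs autonomy: the one-way light time at Saturn.
c = 3.0e8                  # speed of light, m/s
earth_saturn = 1.4e12      # typical Earth-Saturn distance, ~9.5 AU in metres
delay_min = earth_saturn / c / 60
print(f"One-way signal delay: {delay_min:.0f} minutes")   # ~78 minutes
```

By the time a "stop" command arrived, the balloon would have been drifting blind for well over an hour.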
Although this is all interesting and necessary, the basic human desire to explore space via manned missions will remain; still, a certain degree of self-awareness may be required of our robotic explorers as they carry out reconnaissance trips before we make the journey ourselves…