During the 1930s, astronomers came to realize that the Universe is in a state of expansion. By the 1990s, they had realized that the rate of that expansion is accelerating, giving rise to the theory of “Dark Energy”. Because of this, it is estimated that over the next 100 billion years, all stars beyond the Local Group – the gravitationally bound collection of 54 galaxies that includes the Milky Way – will recede beyond the cosmic horizon.
At this point, these stars will not only be unobservable but inaccessible – meaning that no advanced civilization will be able to harness their energy. Addressing this, Dr. Dan Hooper – an astrophysicist at the Fermi National Accelerator Laboratory (FNAL) and the University of Chicago – recently conducted a study indicating how a sufficiently advanced civilization might harvest these stars before they recede out of reach.
To put it simply, the theory of Dark Energy is that space is filled with a mysterious invisible force that counteracts gravity and causes the Universe to expand at an accelerating rate. The theory originated with Einstein’s Cosmological Constant, a term he added to his theory of General Relativity to explain how the Universe could remain static, rather than be in a state of expansion or contraction.
While Einstein was proven wrong, thanks to observations that showed that the Universe was expanding, scientists revisited the concept in order to explain how cosmic expansion has sped up in the past few billion years. The only problem with this theory, according to Dr. Hooper’s study, is that dark energy will eventually become dominant, and the rate of cosmic expansion will increase exponentially.
As a result, the Universe will expand to the point where all stars are so far apart that intelligent species won’t even be able to see them, let alone explore them or harness their energy. As Dr. Hooper told Universe Today via email:
“Cosmologists have learned over the last 20 years that our universe is expanding at an accelerating rate. This means that over the next 100 billion years or so, most of the stars and galaxies that we can now see in the sky will disappear forever, falling beyond any regions of space that we could reach, even in principle. This will limit the ability of a far-future advanced civilization to collect energy, and thus limit any number of things they might want to accomplish.”
In addition to being the Head of the Theoretical Astrophysics Group at the FNAL, Dr. Hooper is also an Associate Professor in the Department of Astronomy and Astrophysics at the University of Chicago. As such, he is well versed when it comes to the big questions of extra-terrestrial intelligence (ETI) and how cosmic evolution will affect intelligent species.
To tackle how advanced civilizations would go about living in such a Universe, Dr. Hooper begins by assuming that the civilizations in question would be a Type III on the Kardashev scale. Named in honor of Russian astrophysicist Nikolai Kardashev, a Type III civilization would have reached galactic proportions and could control energy on a galactic scale. As Hooper indicated:
“In my paper, I suggest that the rational reaction to this problem would be for the civilization to expand outward rapidly, capturing stars and transporting them to the central civilization, where they could be put to use. These stars could be transported using the energy they produce themselves.”
As Dr. Hooper admits, this conclusion relies on two assumptions – first, that a highly advanced civilization will attempt to maximize its access to usable energy; and second, that our current understanding of dark energy and the future expansion of our Universe is approximately correct. With this in mind, Dr. Hooper attempted to calculate which stars could be harvested using Dyson Spheres and other megastructures.
This harvesting, according to Dr. Hooper, would consist of building unconventional Dyson Spheres that use the energy they collect from a star to propel it towards the center of the species’ civilization. High-mass stars are likely to evolve beyond the main sequence before reaching the central civilization, while low-mass stars would not generate enough energy (and therefore acceleration) to avoid falling beyond the horizon.
For these reasons, Dr. Hooper concludes that stars with masses of between 0.2 and 1 Solar Masses will be the most attractive targets for harvesting. In other words, stars that are like our Sun (G-type, or yellow dwarf), orange dwarfs (K-type), and some M-type (red dwarf) stars would all be suitable for a Type III civilization’s purposes. As Dr. Hooper indicates, there would be limiting factors that have to be considered:
“Very small stars often do not produce enough energy to get them back to the central civilization. On the other hand, very large stars are short lived and will run out of nuclear fuel before they reach their destination. Thus the best targets of this kind of program would be stars similar in size (or a little smaller) than the Sun.”
Based on the assumption that such a civilization could travel at 1 – 10% of the speed of light, Dr. Hooper estimates that it would be able to harvest stars out to a co-moving radius of approximately 20 to 50 Megaparsecs (about 65 million to 163 million light-years). Depending on its age – 1 to 5 billion years – it would be able to harvest stars within a range of anywhere from 1 to 4 Megaparsecs (about 3.3 to 13 million light-years) up to several tens of Megaparsecs.
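As a back-of-the-envelope check on these figures, the unit conversion is straightforward, using the standard value of about 3.26 million light-years per megaparsec:

```python
# Unit-conversion check: 1 parsec ≈ 3.2616 light-years,
# so 1 megaparsec (Mpc) ≈ 3.2616 million light-years.
LY_PER_MPC = 3.2616e6

def mpc_to_million_ly(mpc):
    """Convert megaparsecs to millions of light-years."""
    return mpc * LY_PER_MPC / 1e6

# Hooper's 20-50 Mpc harvesting radius:
print(mpc_to_million_ly(20))  # ≈ 65.2 million light-years
print(mpc_to_million_ly(50))  # ≈ 163.1 million light-years
```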
In addition to providing a framework for how a sufficiently-advanced civilization could survive cosmic acceleration, Dr. Hooper’s paper also provides new possibilities in the search for extra-terrestrial intelligence (SETI). While his study primarily addresses the possibility that such a mega-civilization will emerge in the future (perhaps it will even be our own), he also acknowledges the possibility that one could already exist.
In the past, scientists have suggested looking for Dyson Spheres and other megastructures in the Universe by looking for signatures in the infrared or sub-millimeter bands. However, megastructures that have been built to completely harvest the energy of a star, and use it to transport them across space at relativistic speeds, would emit entirely different signatures.
In addition, the presence of such a mega-civilization could be discerned by looking at other galaxies and regions of space to see if a harvesting and transport process has already begun (or is in an advanced stage). Whereas past searches for Dyson Spheres have focused on detecting the presence of structures around individual stars within the Milky Way, this kind of search would focus on galaxies or groups of galaxies in which most of the stars have been surrounded by Dyson Spheres and removed.
“This provides us with a very different signal to look for,” said Dr. Hooper. “An advanced civilization that is in the process of this program would alter the distribution of stars over regions of space tens of millions of light years in extent, and would likely produce other signals as a result of stellar propulsion.”
In the end, this theory not only provides a possible solution for how advanced species might survive cosmic expansion, it also offers new possibilities in the hunt for extra-terrestrial intelligence. With next-generation instruments looking farther into the Universe and with greater resolution, perhaps we should be on the lookout for hypervelocity stars that are all being transported to the same region of space.
It could be a Type III civilization preparing for the day when dark energy takes over!
Three times in October 2017, researchers turned a powerful radar telescope near Tromsø, Norway towards an invisibly faint star in the constellation Canis Minor (the small dog) and beamed a coded message into space in an attempt to signal an alien civilization. This new attempt to find other intelligent life in the universe was reported in a presentation at the ‘Language in the Cosmos’ symposium held on May 26 in Los Angeles, California.
METI International sponsored the symposium. This organization was founded to promote messaging to extraterrestrial intelligence (METI) as a new approach to the search for extraterrestrial intelligence (SETI). It also supports other aspects of SETI research and astrobiology. The symposium was held as part of the International Space Development Conference sponsored by the National Space Society. It brought together linguists and other scientists for a daylong program of 11 presentations. Dr. Sheri Wells-Jensen, a linguist from Bowling Green State University in Ohio, was the organizer.
This is the second of a two-part series about METI International’s symposium. It will focus on a presentation given at the symposium by the president of METI International, Dr. Douglas Vakoch. He spoke about a project that hasn’t previously gotten much attention: the first attempt to send a message to a nearby potentially habitable exoplanet, GJ273b. Vakoch led the team that constructed the tutorial portion of the message.
Message to the stars
The modern search for extraterrestrial intelligence began in 1960. This is when astronomer Frank Drake used a radio telescope in West Virginia to listen for signals from two nearby stars. Astronomers have sporadically mounted increasingly sophisticated searches, when funding has been available. The largest current project is Breakthrough Listen, funded by billionaire Yuri Milner. Searches have been made for laser as well as radio signals. Researchers have also looked for the megastructures that advanced aliens might create in space near their stars. METI International advocates an entirely new approach in which messages are transmitted to nearby stars in hopes of eliciting a reply.
The project to send a message to GJ273b was a collaboration between artists and scientists. It was initiated by the organizers of the Sónar Music, Creativity, and Technology Festival. The Sónar festival has been held every year since 1994 in Barcelona, Spain. The organizers wanted to commemorate the 25th anniversary of the festival. To implement the project, the festival organizers sought the help of the Catalonia Institute of Space Studies (IEEC), and METI International.
To transmit the message, the team turned to the European Incoherent Scatter Scientific Association (EISCAT), which operates a network of radio and radar telescopes in Finland, Norway, and Sweden. This network is primarily used to study interactions between the sun and Earth’s ionosphere and magnetic field from a vantage point north of the Arctic Circle. The message was transmitted from a 32 meter diameter steerable dish at EISCAT’s Ramfjordmoen facility near Tromsø, Norway, with a peak power of 2 megawatts. It is the first interstellar message ever to be sent towards a known potentially habitable exoplanet.
The target system
The obscure star known by the catalogue designation GJ273 caught the attention of the Dutch-American astronomer Willem J. Luyten in 1935, while he was surveying stellar motions. The star stood out because it was moving through Earth’s sky at the surprisingly fast rate of 3.7 arc seconds per year. Later study showed that this fast apparent motion is due to the fact that GJ273 is one of the sun’s nearest neighbors, just 12.4 light years away – the 24th closest star to the sun. Because of Luyten’s discovery, it is sometimes known as Luyten’s star.
Luyten’s star is a faint red dwarf star with only a quarter of the sun’s mass. It caught astronomers’ attention again in March 2017, when an exoplanet, GJ273b, was discovered in its habitable zone. The habitable zone is the range of distances where a planet with an atmosphere similar to Earth’s would, theoretically, have a range of temperatures suitable for liquid water on its surface. The planet is a super-Earth, with a mass 2.89 times that of our homeworld. It orbits just 8.5 million miles (0.091 AU) from its faint sun, which it circles every 18 Earth days.
This exoplanet was chosen because of its proximity to Earth, and because it is visible in the sky from the transmitter’s northerly location. Because GJ273b is relatively nearby, and radio messages travel at the speed of light, a reply from the aliens could come as early as the middle of this century.
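The arithmetic behind that estimate is simple: a radio message travels at light speed, so it takes 12.4 years each way. A quick sketch:

```python
# GJ273 (Luyten's star) is 12.4 light-years away, so a radio message
# makes the one-way trip in 12.4 years.
DISTANCE_LY = 12.4
TRANSMIT_YEAR = 2017.8  # first block of transmissions, October 2017

arrival = TRANSMIT_YEAR + DISTANCE_LY    # signal reaches GJ273b
earliest_reply = arrival + DISTANCE_LY   # an immediate answer gets back to Earth

print(round(arrival))         # ≈ 2030
print(round(earliest_reply))  # ≈ 2043
```

So even in the most optimistic case, an answer could not arrive before the early 2040s.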
This is not humanity’s first attempt at interstellar messaging. The message carried aboard each Voyager spacecraft was encoded digitally on a phonographic record. It was largely pictorial, and attempted to give a comprehensive overview of humans and Earth. It also included a selection of music from various Earthly cultures. These spacecraft will take tens of thousands of years to reach the stars, so no reply can be expected on a timescale relevant to our society.
In some ways the GJ273b message is very different from the Voyager message. Unlike the Voyager record, it isn’t pictorial and doesn’t attempt to give a comprehensive overview of humans and Earth. This is perhaps because, unlike the Voyager message, it is intended to initiate a dialog on a timescale of decades. It resembles the Voyager message in that it contains music from Earth, namely, music from the artists that performed at the Sónar music festival.
The message consists of a string of binary digits – ones and zeros – represented in the signal by a shift between two slightly different radio frequencies. The ‘hello’ section is designed to catch the attention of alien listeners. It consists of a string of prime numbers (numbers divisible only by themselves and one), each represented as a pattern of binary digits.
The message continues the sequence up to 193. A signal like this almost certainly can’t be produced by natural processes, and can only be the designed handiwork of beings who know math.
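The full bit layout of the Sónar message isn’t reproduced here, but the idea of the ‘hello’ section can be sketched: generate the primes up to 193 and write each one as a binary pattern. A minimal illustration (not the message’s actual encoding):

```python
def primes_up_to(n):
    """Return all primes up to n by simple trial division (fine for small n)."""
    found = []
    for candidate in range(2, n + 1):
        # candidate is prime if no smaller prime divides it
        if all(candidate % p for p in found):
            found.append(candidate)
    return found

# The 'hello' section runs through the primes up to 193,
# each expressed as a string of ones and zeros.
for p in primes_up_to(193):
    print(f"{p:3d} -> {p:b}")
```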
After the ‘hello’ section comes the tutorial. This, and all the rest of the message, uses eight-bit blocks of binary digits as the basis for its symbols. The tutorial begins by introducing number symbols through counting in base two.
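As an illustration of what counting in base two with eight-bit blocks looks like (again, a sketch, not the message’s actual symbol set):

```python
# Counting in base two: each number becomes a pattern of ones and zeros,
# padded here to an eight-bit block as the tutorial's symbols are.
for n in range(1, 9):
    print(f"{n} -> {n:08b}")
# e.g. 1 -> 00000001, 2 -> 00000010, 3 -> 00000011, ...
```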
The tutorial then proceeds to geometry using combinations of numbers and symbols to illustrate the Pythagorean theorem. It eventually progresses to sine waves, thereby describing the radio wave carrying the signal itself. Finally the tutorial describes the physics of sound waves and the relationships between musical notes.
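The relationships between musical notes that the tutorial describes are themselves mathematical. In the twelve-tone equal temperament used for most Western music, each semitone step multiplies frequency by the twelfth root of two; a sketch (assuming the standard A4 = 440 Hz tuning, which the message’s own encoding may not use):

```python
# In twelve-tone equal temperament, each semitone step multiplies
# frequency by 2**(1/12); twelve steps (an octave) double it.
SEMITONE = 2 ** (1 / 12)

def note_freq(semitones_above_a4, a4=440.0):
    """Frequency (Hz) of a note a given number of semitones above A4."""
    return a4 * SEMITONE ** semitones_above_a4

print(round(note_freq(12)))  # A5, one octave up: 880 Hz
print(round(note_freq(-9)))  # C4 (middle C): ≈ 262 Hz
```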
Besides the numbers, the tutorial introduces 55 8-bit symbols in all. It provides the instructions that aliens would need to properly reproduce a series of digitally encoded musical selections from the Sónar Festival.
During its journey of 70 trillion miles, the message is sure to become corrupted with noise. To compensate, the tutorial was transmitted three times during each transmission, requiring a total of 33 minutes to transmit. The entire transmission was repeated on three separate days, October 16, 17, and 18, 2017. A second block of three transmissions was made on May 14, 15, and 16, 2018.
Each transmission included a different selection of music, with the works of 38 different musicians included in all. You can hear recordings of all this music at the Sónar Calling GJ273b website.
The rationale behind the message
Current and past SETI projects conducted by astronomers here on Earth assume that advanced aliens would make things easy for newly emerging civilizations by establishing powerful beacons that would broadcast in all directions at all times. Thus, SETI searchers generally use the same sort of highly directional dish antennae often used for other research in radio astronomy. They listen to any one star for only a few minutes, searching each one in turn for the beacon.
Unlike the always-on beacons imagined as the targets of Earth’s SETI searches, the Sónar message was transmitted for only 33 minutes on each of three days, and on only two occasions. Vakoch admits that “our message would likely be undetected by a civilization on GJ273b using the same strategy” favored by beacon-searching SETI researchers on Earth.
However, some researchers have called traditional SETI assumptions and strategy into question, and studies of alternative search technologies have already been conducted. Vakoch notes that “we humans already have the technological capacity, and need only the funding, to conduct an all-sky survey that would detect intermittent transmission like ours”.
A larger problem is that the message was directed at just one planet. Although GJ273b orbits within its star’s habitable zone, we really know little about what that means for whether the planet is actually habitable, or whether it hosts life or intelligence. Earth itself has been habitable for billions of years, but it has only had a civilization capable of radio transmissions for a century.
Vakoch conceded that “The only way we will get a reply back from GJ273b is if the galaxy is chock full of intelligent life, and it is out there just waiting for us to take the initiative. More realistically, we may need to replicate this process with hundreds, thousands, or even millions of stars before we reach one with an advanced civilization that can detect our signal”. METI International aims to conduct a design study for such a large scale METI project in hopes that funding will materialize from governmental or other sources.
How could you devise a message for intelligent creatures from another planet? They wouldn’t know any human language. Their ‘speech’ might be as different from ours as the eerie cries of whales or the twinkling lights of fireflies. Their cultural and scientific history would have followed its own path. Their minds might not even work like ours. Would the deep structure of language, its so-called ‘universal grammar’, be the same for aliens as for us?

A group of linguists and other scientists gathered on May 26 to discuss the challenging problems posed by devising a message that extraterrestrial beings could understand. There are growing hopes that such beings might be out there among the billions of habitable planets that we now think exist in our galaxy. The symposium, called ‘Language in the Cosmos’, was organized by METI International. It took place as part of the National Space Society’s International Space Development Conference in Los Angeles. The Chair of the workshop was Dr. Sheri Wells-Jensen, a linguist from Bowling Green State University in Ohio.
What is METI International?
‘METI’ stands for messaging to extraterrestrial intelligence. METI International is an organization of scientists and scholars that aims to foster an entirely new approach in our search for alien civilizations. Since 1960, researchers have been looking for extraterrestrials by searching for possible messages they might send to us by radio or laser beams. They have sought the giant megastructures that advanced alien societies might build in space. METI International wants to move beyond this purely passive search strategy. They want to construct and transmit messages to the planets of relatively nearby stars, hoping for a response.
One of the organization’s central goals is to build an interdisciplinary community of scholars concerned with designing interstellar messages that can be understood by non-human minds. More generally, it works internationally to promote research in the search for extraterrestrial intelligence and astrobiology, and to understand the evolution of intelligence here on Earth. The daylong symposium featured eleven presentations. Its main theme was the role of linguistics in communication with extraterrestrial intelligence.
This article is the first in a two-part series. It will focus on one of the most fundamental issues addressed at the conference: the question of whether the deep underlying structure of language would likely be the same for extraterrestrials as for us. Linguists understand the deep structure of language using the theory of ‘universal grammar’, which the eminent linguist Noam Chomsky developed in the middle of the twentieth century.
Despite its name, Chomsky originally took his ‘universal grammar’ theory to imply that there are major, and maybe insuperable barriers to mutual understanding between humans and extraterrestrials. Let’s first consider why Chomsky’s theories seemed to make interstellar communication virtually hopeless. Then we’ll examine why Chomsky’s colleagues who presented at the symposium, and Chomsky himself, now think differently.
Before the second half of the twentieth century, linguists believed that the human mind was a blank slate, and that we learned language entirely by experience. These beliefs dated to the seventeenth century philosopher John Locke and were elaborated in the laboratories of behaviorist psychologists in the early twentieth century. Beginning in the 1950s, Noam Chomsky challenged this view. He argued that learning a language couldn’t simply be a matter of learning to associate stimuli with responses. He saw that young children, even before the age of 5, can consistently produce and interpret original sentences that they have never heard before. He spoke of a “poverty of the stimulus”: children couldn’t possibly be exposed to enough examples to learn the rules of language from scratch.
Chomsky posited instead that the human brain contained a “language organ”. This language organ was already pre-organized at birth for the basic rules of language, which he called “universal grammar”. It made human infants primed and ready to learn whatever language they were exposed to using only a limited number of examples. He proposed that the language organ arose in human evolution, maybe as recently as 50,000 years ago. Chomsky’s powerful arguments were accepted by other linguists. He came to be regarded as one of the great linguists and cognitive scientists of the twentieth century.
Universal grammar and ‘Martians’
Human beings speak more than 6000 different languages. Chomsky defined his “universal grammar” as “the system of principles, conditions, and rules that are elements or properties of all human languages”. He said it could be taken to express “the essence of human language”. But he wasn’t convinced that this ‘essence of human language’ was the essence of all theoretically possible languages. When Chomsky was asked by an interviewer from Omni Magazine in 1983 whether he thought that it would be possible for humans to learn an alien language, he replied:
“Not if their language violated the principles of our universal grammar, which, given the myriad ways that languages can be organized, strikes me as highly likely…The same structures that make it possible to learn a human language make it impossible for us to learn a language that violates the principles of universal grammar. If a Martian landed from outer space and spoke a language that violated universal grammar, we simply would not be able to learn that language the way that we learn a human language like English or Swahili. We should have to approach the alien’s language slowly and laboriously — the way that scientists study physics, where it takes generation after generation of labor to gain new understanding and to make significant progress. We’re designed by nature for English, Chinese, and every other possible human language. But we’re not designed to learn perfectly usable languages that violate universal grammar. These languages would simply not be within the range of our abilities.”
If intelligent, language-using life exists on another planet, Chomsky knew, it would necessarily have arisen by a different series of evolutionary changes than the uniquely improbable path that produced human beings. A different history of climate changes, geological events, asteroid and comet impacts, random genetic mutations, and other events would have produced a different set of life forms. These would have interacted with one another in different ways over the history of life on the planet. The “Martian” language organ, with its different and unique history, could, Chomsky surmised, be entirely different from its human counterpart, making communication monumentally difficult, if not impossible.
Convergent evolution and alien minds
The tree of life
Why did Chomsky think that the human and ‘Martian’ language organs would likely be fundamentally different? And why do he and his colleagues now hold different views? To find out, we first need to explore some basic principles of evolutionary theory.
Originally formulated by the naturalist Charles Darwin in the nineteenth century, the theory of evolution is the central principle of modern biology. It is our best tool for predicting what life might be like on other planets. The theory maintains that living species evolved from previous species. It asserts that all life on Earth is descended from an initial Earthly life form that lived more than 3.8 billion years ago.
You can think of these relationships as like a tree with many branches. The base of the trunk of the tree represents the first life on Earth 3.8 billion years ago. The tip of each branch represents now, and a modern species. The diverging branches connecting each branch tip with the trunk represent the evolutionary history of each species. Each branch point in the tree is where two species diverged from a common ancestor.
Evolution, brains, and contingency
To understand Chomsky’s thinking, we’ll start with a familiar group of animals: the vertebrates, or animals with backbones. This group includes fishes, amphibians, reptiles, birds, and mammals, including humans.
We’ll compare the vertebrates with a less familiar, and distantly related, group: the cephalopod molluscs. This group includes octopuses, squids, and cuttlefish. These two groups have been evolving along separate evolutionary paths (different branches of our tree) for more than 600 million years. I’ve chosen them because, as they’ve traveled along their separate branches of our evolutionary tree, each has evolved its own sort of complex brain and complex sense organs.
The brains of all vertebrates have the same basic plan. This is because they all evolved from a common ancestor that already had a brain with that basic plan. The octopus’s brain, by contrast, has an utterly different organization. This is because the common ancestor of cephalopods and vertebrates lies much further back in evolutionary time, on a lower branch of our tree. It probably had only the simplest of brains, if any at all.
With no common plan to inherit, the two kinds of brains evolved independently of one another. They are different because evolutionary change is contingent. That is, it involves varying combinations of influences, including chance. Those contingent influences were different along the path that produced cephalopod brains than along the one that led to vertebrate brains.
Chomsky believed that many languages might be theoretically possible that violated the seemingly arbitrary constraints of human universal grammar. There didn’t seem to be anything that made our actual universal grammar something special. So, because of the contingent nature of evolution, Chomsky assumed that the ‘Martian’ language organ would arrive at one of these other possibilities, making it fundamentally different from its human counterpart.
This sort of evolution-based pessimism about the likelihood that humans and aliens could communicate is widespread. At the symposium, Dr. Gonzalo Munévar of Lawrence Technological University argued that intelligent creatures that evolved sensory systems and cognitive structures different from ours would not develop similar scientific theories or even similar mathematics.
Evolution, eyes, and convergence
Now let’s consider another feature of the octopus and other cephalopods: their eyes. Surprisingly, the eyes of octopuses resemble those of vertebrates in intricate detail. This uncanny resemblance can’t be explained in the same way as the general resemblance of vertebrate brains to one another. It’s almost certainly not due to inheritance of the traits from a common ancestor. It’s true that some of the genes involved in the building of eyes are the same in most animals, appearing far down towards the trunk of our evolutionary tree. But biologists are almost certain that the common ancestor of cephalopods and vertebrates was much too simple to have any eyes at all.
Biologists think eyes evolved separately more than forty times on Earth, each on its own branch of the evolutionary tree. There are many different kinds of eyes. Some are so strangely different from our own that even a science fiction writer would be surprised by them. So, if evolutionary change is contingent, why do octopus eyes bear a striking and detailed similarity to our own? The answer lies outside of evolutionary theory, with the laws of optics. Many large animals, like the octopus, need acute vision. There is only one good way, under the laws of optics, to make an eye that meets the needed requirements. Whenever such an eye is needed, evolution finds this same best solution. This phenomenon is called convergent evolution.
Life on another planet would have its own separate evolutionary tree, with the base of the trunk representing the appearance of life on that planet. Because of the contingency of evolutionary change, the pattern of branches might be quite different from our Earthly evolutionary tree. But because the laws of optics are the same everywhere in the universe, we can expect that large animals under similar conditions will evolve an eye that looks a lot like that of a vertebrate or a cephalopod. Convergent evolution is potentially a universal phenomenon.
Not just for humans anymore?
Taking apart the language organ
By the beginning of the twenty-first century, Chomsky and some of his colleagues started to look at the language organ and universal grammar in a new way. This new view made it seem like the properties of universal grammar were inevitable, much as the laws of optics made many features of the octopus’s eye inevitable.
In a 2002 review, Chomsky and his colleagues Marc Hauser and Tecumseh Fitch argued that the language organ can be decomposed into a number of distinct parts. The sensory-motor, or externalization, system is involved in the mechanics of expressing language through methods like vocal speech, writing, typing, or sign language. The conceptual-intentional system relates language to concepts.
The core of the system, the trio proposed, consists of what they called the narrow faculty of language. It is a system for applying the rules of language recursively, over and over, thereby allowing the construction of an almost endless range of meaningful utterances. Jeffrey Punske and Bridget Samuels similarly spoke of a ‘syntactic spine’ of all human languages. Syntax is the set of rules that govern the grammatical structure of sentences.
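The generative power of recursion is easy to illustrate with a toy example: a rule that can re-invoke itself lets a finite rule set produce unboundedly many sentences. The grammar below is purely hypothetical, not a model of any real linguistic theory:

```python
# A toy illustration of syntactic recursion: the sentence rule can
# re-invoke itself, so finitely many rules generate unboundedly many
# sentences of ever-greater depth. (Illustrative only.)

def sentence(depth):
    """Build a sentence with `depth` levels of embedded clauses."""
    if depth == 0:
        return "the star shines"
    return "the astronomer knows that " + sentence(depth - 1)

for d in range(3):
    print(sentence(d))
# the star shines
# the astronomer knows that the star shines
# the astronomer knows that the astronomer knows that the star shines
```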
The inevitability of universal grammar
Chomsky and his colleagues made a careful analysis of what computations a nervous system might need to perform in order to make this recursion possible. As an abstract description of how the narrow faculty works, the researchers turned to a mathematical model called the Turing machine. The mathematician Alan Turing developed this model in the 1930s, and this theoretical ‘machine’ led to the development of electronic computers.
Their analysis led to a striking and unexpected conclusion. In a book chapter currently in press, Watumull and Chomsky write that “Recent work demonstrating the simplicity and optimality of language increases the cogency of a conjecture that at one time would have been summarily dismissed as absurd: the basic principles of language are drawn from the domain of (virtual) conceptual necessity”. Jeffrey Watumull wrote that this strong minimalist thesis posits that “there exist constraints in the structure of the universe itself such that systems cannot but conform”. Our universal grammar is something special, and not just one among many theoretical possibilities.
Plato and the strong minimalist thesis
The constraints of mathematical and computational necessity shape the narrow faculty to be as it is, just like the laws of optics shape both the vertebrate and the octopus eye. ‘Martian’ languages, then, might follow the same universal grammar as human languages because there is only one best way to make the recursive core of the language organ.
Through the process of convergent evolution, nature would be compelled to find this one best way wherever and whenever in the universe that language evolves. Watumull supposed that the brain mechanisms of arithmetic might reflect a similarly inevitable convergence. That would mean that the basics of arithmetic would also be the same for humans and aliens. We must, Watumull and Chomsky wrote, “rethink any presumptions that extraterrestrial intelligence or artificial intelligence would really be all that different from human intelligence”.
This is the striking conclusion that Watumull and, in a complementary way, Punske and Samuels presented at the symposium. Universal grammar may actually be universal, after all. Watumull compared this thesis to a modern, computer-age version of the beliefs of the ancient Greek philosopher Plato, who maintained that mathematical and logical relationships are real things that exist in the world apart from us, and are merely discovered by the human mind. As a novel contribution to a difficult ages-old philosophical problem, these new ideas are sure to stir controversy. They illustrate the depth of new knowledge that awaits us as we reach out to other worlds and other minds.
Universal grammar and messages for aliens
What are the consequences of this new way of thinking about the structure of language for practical attempts to create interstellar messages? Watumull thinks the new thinking is a challenge to “the pessimistic relativism of those who think it overwhelmingly likely that terrestrial (i.e. human) intelligence and extraterrestrial intelligence would be (perhaps in principle) mutually unintelligible”. Punske and Samuels agree, and think that “math and physics likely represent the best bet for common concepts that could be used as a starting point”.
Watumull supposes that while the minds of aliens or artificial intelligences may be qualitatively similar to ours, they may differ quantitatively in having bigger memories, or the ability to think much faster than us. He is confident that an alien language would likely include nouns, verbs, and clauses. That means they could probably understand an artificial message containing such things. Such a message, he thinks, might also profitably include the structure and syntax of natural human languages, because this would likely be shared by alien languages.
Punske and Samuels seem more cautious. They note that “There are some linguists who don’t believe nouns and verbs are universal human language categories”. Still, they suspect that “alien languages would be built of discrete meaningful units that can combine into larger meaningful units”. Human speech consists of a linear sequence of words, but, as Punske and Samuels note, “Some of the linearity imposed on human language may be due to the constraints of our vocal anatomy, and already starts to break down when we think about signed languages”.
Overall, the findings foster new hope that devising a message comprehensible to extraterrestrials is feasible. In the next installment, we will look at a new example of such a message. It was transmitted in 2017 towards a star 12 light years from our sun.
In 1961, famed astrophysicist Frank Drake proposed a formula that came to be known as the Drake Equation. Based on a series of factors, this equation sought to estimate the number of extra-terrestrial intelligences (ETIs) that would exist within our galaxy at any given time. Since that time, multiple efforts have been launched to find evidence of alien civilizations, which are collectively known as the search for extra-terrestrial intelligence (SETI).
The most well-known of these is the SETI Institute, which has spent the past few decades searching the cosmos for signs of extra-terrestrial radio communications. But according to a new study that seeks to update the Drake Equation, a team of international astronomers indicates that even if we did find signals of alien origin, those who sent them would likely be long dead.
To recap, the Drake Equation states that the number of civilizations in our galaxy can be calculated by multiplying the average rate of star formation in our galaxy (R*), the fraction of stars that have planets (fp), the number of planets per system that can support life (ne), the fraction of those planets that will develop life (fl), the fraction that will develop intelligent life (fi), the fraction that will develop transmission technologies (fc), and the length of time that these civilizations will have to transmit signals into space (L).
This can be expressed mathematically as: N = R* x fp x ne x fl x fi x fc x L. For the sake of their study, the team began by making assumptions about two key parameters of the Drake Equation. In short, they assume that civilizations emerge in our galaxy (N) at a constant rate, and that they will not emit electromagnetic radiation (i.e. radio transmissions) indefinitely, but will experience some type of limiting event over time (L).
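As a back-of-the-envelope illustration, the equation can be evaluated directly. Every parameter value in the sketch below is an assumed placeholder chosen for demonstration, not a measured quantity:

```python
# Hypothetical illustration of the Drake Equation:
# N = R* x fp x ne x fl x fi x fc x L
def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# All values below are illustrative assumptions, not measurements.
n = drake_n(r_star=1.5,       # stars formed per year in the Milky Way
            f_p=1.0,          # fraction of stars with planets
            n_e=0.2,          # habitable planets per planetary system
            f_l=0.1,          # fraction of those that develop life
            f_i=0.01,         # fraction of those that develop intelligence
            f_c=0.1,          # fraction that develop transmitting technology
            lifetime=10_000)  # years a civilization transmits (L)
print(round(n, 2))
```

With these particular guesses the product works out to about 0.3 civilizations, which shows how sensitive N is to the assumed values, especially L.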
As Dr. Grimaldi explained to Universe Today via email:
“We assume that hypothetical communicating civilizations (the emitters) send isotropic electromagnetic signals for a certain duration of time L, and that the birthrate of the emissions is constant. Each emission process gives rise to a spherical shell of thickness cL (where c is the speed of light) filled by electromagnetic waves. The outer radii of the spherical shells grow at the speed of light.”
In short, they assumed that technologically-advanced civilizations are born and die in our galaxy at a constant rate. These civilizations do not transmit indefinitely, but their communications will still be traveling outwards at the speed of light, where they will be detectable within a certain volume of space. The team then developed a model of our galaxy to determine whether humanity would have any chance of detecting these signals.
This model treated each alien communication as a spherical shell whose intersection with the galactic disk forms a ring (annulus) that gradually sweeps through our galaxy. As Dr. Grimaldi explained:
“We model the Galaxy as a disk. The emitters occupy random positions in the disk. Each spherical shell intersects the disk in annuli. The probability that an annulus crosses any given point of the disk (e.g. the Earth) is just the ratio between the area of the annuli and the area of the galactic disk. The total area of the annuli over the area of the galactic disk gives the mean number (N) of electromagnetic signals that intersect any given point (e.g. the Earth). This mean number is a key quantity, because SETI can detect signals only if these cross the Earth at the time of measurement.”
As they determined from their calculations, two cases emerge from this model based on whether the radiation shells are (1) thinner than the size of the Milky Way or (2) thicker. These correspond to the lifetimes of technologically-advanced civilizations (L), which could be less than or greater than the time it takes for light to cross our Milky Way (i.e. ~100,000 years). As Dr. Grimaldi explained:
“The mean number (N) of signals crossing Earth depends on the signal longevity (L) and their birthrate. We find that N is just L times the birthrate, which coincides with Drake’s N (that is, the mean number of currently emitting civilizations). This result (mean number of signals crossing Earth = Drake’s N) arises naturally from our assumption that the birthrate of signals is constant.”
In the first case, each shell wall would have a thickness smaller than the size of our galaxy and would fill only a fraction of the galaxy’s volume (thus inhibiting SETI detection). However, if there is a high enough birthrate of detectable civilizations, these shell walls may fill our galaxy and even overlap. In the second case, each radiation shell would be thicker than the size of our galaxy, making SETI detection more likely.
From all this, the team also calculated that the average number of E.T. signals crossing Earth at any given time would equal the number of civilizations currently transmitting. Unfortunately, they also determined that the signals reaching us would most likely come from civilizations that have long since gone extinct – not from the ones presently broadcasting.
As Dr. Grimaldi explained, this raises a rather interesting implication when it comes to SETI research:
“Instead of viewing the Drake’s N as a product of probability factors for the development of communicating civilizations, our results imply that Drake’s N is a directly measurable quantity (at least in principle) because it coincides with the mean number of signals crossing Earth.”
For those hoping to find evidence of extra-terrestrial intelligence in our lifetime, this is likely to be a bit discouraging. On the one hand (and depending on the number of alien civilizations that exist in our galaxy), we may have a hard time picking up extra-terrestrial transmissions. On the other, those that we do find may be coming from a civilization that has long since gone extinct.
It also means that if any civilization should pick up our radio wave transmissions someday, we won’t be around to meet them. However, it does not rule out the possibility that we will find evidence that intelligent life has existed within our galaxy in the past. In fact, over the course of our own civilization’s lifetime, humanity may find evidence of multiple ETIs that existed at one time.
In addition, none of this negates the possibility of finding evidence of an existing civilization. It’s just not likely we’ll be able to sample their music, entertainment or messages first!
Roughly half a century ago, Cornell astronomer Frank Drake conducted Project Ozma, the first systematic SETI survey at the National Radio Astronomy Observatory in Green Bank, West Virginia. Since that time, scientists have conducted multiple surveys in the hopes of finding indications of “technosignatures” – i.e. evidence of technologically-advanced life (such as radio communications).
To put it plainly, if humanity were to receive a message from an extra-terrestrial civilization right now, it would be the single-greatest event in the history of civilization. But according to a new study, such a message could also pose a serious risk to humanity. Drawing on multiple possibilities that have been explored in detail, the study’s authors consider how humanity could shield itself from malicious spam and viruses.
To be fair, the notion that an extra-terrestrial civilization could pose a threat to humanity is not just a well-worn science fiction trope. For decades, scientists have treated it as a distinct possibility and considered whether or not the risks outweigh the possible benefits. As a result, some theorists have suggested that humans should not engage in SETI at all, or that we should take measures to hide our planet.
As Professor Learned told Universe Today via email, there has never been a consensus among SETI researchers about whether or not ETI would be benevolent:
“There is no compelling reason at all to assume benevolence (for example that ETI are wise and kind due to their ancient civilization’s experience). I find much more compelling the analogy to what we know from our history… Is there any society anywhere which has had a good experience after meeting up with a technologically advanced invader? Of course it would go either way, but I think often of the movie Alien… a credible notion it seems to me.”
In addition, assuming that an alien message could pose a threat to humanity makes practical sense. Given the sheer size of the Universe and the limitations imposed by Special Relativity (i.e. no known means of FTL), it would always be cheaper and easier to send a malicious message to eradicate a civilization compared to an invasion fleet. As a result, Hippke and Learned advise that SETI signals be vetted and/or “decontaminated” beforehand.
In terms of how a SETI signal could constitute a threat, the researchers outline a number of possibilities. Beyond the likelihood that a message could convey misinformation designed to cause a panic or self-destructive behavior, there is also the possibility that it could contain viruses or other embedded technical issues (i.e. the format could cause our computers to crash).
Article 6 of this declaration states the following:
“The discovery should be confirmed and monitored and any data bearing on the evidence of extraterrestrial intelligence should be recorded and stored permanently to the greatest extent feasible and practicable, in a form that will make it available for further analysis and interpretation. These recordings should be made available to the international institutions listed above and to members of the scientific community for further objective analysis and interpretation.”
As such, a message that is confirmed to have originated from an ETI would most likely be made available to the entire scientific community before it could be deemed to be threatening in nature. Even if there was only one recipient, and they attempted to keep the message under strict lock and key, it’s a safe bet that other parties would find a way to access it before long.
The question naturally arises then, what can be done? One possibility that Hippke and Learned suggest is to take an analog approach to interpreting these messages, which they illustrate using the 2017 SETI Decrypt Challenge as an example. This challenge, which was issued by René Heller of the Max Planck Institute for Solar System Research, consisted of a sequence of about two million binary digits and related information being posted to social media.
In addition to being a fascinating exercise that gave the public a taste of what SETI research means, the challenge also sought to address some central questions when it came to communicating with an ETI. Foremost among these was whether or not humanity would be able to understand a message from an alien civilization, and how we might be able to make a message comprehensible (if we sent one first). As they state:
“As an example, the message from the “SETI Decrypt Challenge” (Heller 2017) was a stream of 1,902,341 bits, which is the product of prime numbers. Like the Arecibo message (Staff At The National Astronomy Ionosphere Center 1975) and Evpatoria’s “Cosmic Calls” (Shuch 2011), the bits represent the X/Y black/white pixel map of an image. When this is understood, further analysis could be done off-line by printing on paper. Any harm would then come from the meaning of the message, and not from embedded viruses or other technical issues.”
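The stated bit count can be checked with a few lines of trial division. Recovering the prime factorization is the first step a recipient would take toward guessing pixel dimensions; the factorization below is computed directly, not quoted from the paper:

```python
# Factor the SETI Decrypt Challenge message length by trial division.
# The prime factors constrain the possible rectangular pixel layouts.
def prime_factors(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining n is itself prime
    return factors

print(prime_factors(1_902_341))  # -> [7, 359, 757]
```

Since 1,902,341 = 7 × 359 × 757, a recipient might hypothesize, for instance, a stack of 7 frames of 359 × 757 pixels – the kind of deduction the challenge asked participants to make.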
However, where messages are made up of complex codes or even a self-contained AI, the need for sophisticated computers may be unavoidable. In this case, the authors explore another popular recommendation, which is the use of quarantined machines to conduct the analysis – i.e. a message prison. Unfortunately, they also acknowledge that no prison would be 100% effective and containment could eventually fail.
“This scenario resembles the Oracle-AI, or AI box, of an isolated computer system where a possibly dangerous AI is ‘imprisoned’ with only minimalist communication channels,” they write. “Current research indicates that even well-designed boxes are useless, and a sufficiently intelligent AI will be able to persuade or trick its human keepers into releasing it.”
In the end, it appears that the only real solution is to maintain a vigilant attitude and ensure that any messages we send are as benign as possible. As Hippke summarized: “I think it’s overwhelmingly likely that a message will be positive, but you can not be sure. Would you take a 1% chance of death for a 99% chance of a cure for all diseases? One learning from our paper is how to design own message, in case we decide to send any: Keep it simple, don’t send computer code.”
Basically, when it comes to the search for extra-terrestrial intelligence, the rules of internet safety may apply. If we begin to receive messages, we shouldn’t trust those that come with big attachments and send any suspicious looking ones to our spam folder. Oh, and if a sender is promising the cure for all known diseases, or claims to be the deposed monarch of Andromeda in need of some cash, we should just hit delete!
When it comes to looking for life on extra-solar planets, scientists rely on what is known as the “low-hanging fruit” approach. In lieu of being able to observe these planets directly or up close, they are forced to look for “biosignatures” – substances that indicate that life could exist there. Given that Earth is the only planet (that we know of) that can support life, these include carbon, oxygen, nitrogen and water.
However, while the presence of these elements are a good way of gauging “habitability”, they are not necessarily indications that extra-terrestrial civilizations exist. Hence why scientists engaged in the Search for Extra-Terrestrial Intelligence (SETI) also keep their eyes peeled for “technosignatures”. Targeting the Kepler field, a team of scientists recently conducted a study that examined 14 planetary systems for indications of intelligent life.
The team selected 14 systems from the Kepler catalog and examined them for technosignatures. While radio waves are a common occurrence in the cosmos, not all sources can be easily attributed to natural causes. Where and when this is the case, scientists conduct additional studies to try and rule out the possibility that they are a technosignature. As Professor Margot told Universe Today via email:
“In our article, we define a “technosignature” as any measurable property or effect that provides scientific evidence of past or present technology, by analogy with “biosignatures,” which provide evidence of past or present life.”
For the sake of their study, the team conducted an L-band radio survey of these 14 planetary systems. Specifically, they looked for signs of radio waves in the 1.15 to 1.73 gigahertz (GHz) range. At those frequencies, their study is sensitive to Arecibo-class transmitters located within 450 light-years of Earth. So if any of these systems have civilizations capable of building radio observatories comparable to Arecibo, the team hoped to find out!
“We searched for signals that are narrow (< 10 Hz) in the frequency domain,” said Margot. “Such signals are technosignatures because natural sources do not emit such narrowband signals… We identified approximately 850,000 candidate signals, of which 19 were of particular interest. Ultimately, none of these signals were attributable to an extraterrestrial source.”
What they found was that of the 850,000 candidate signals, about 99% of them were automatically ruled out because they were quickly determined to be the result of human-generated radio-frequency interference (RFI). Of the remaining candidates, another 99% were also flagged as anthropogenic because their frequencies overlapped with other known sources of RFI – such as GPS systems, satellites, etc.
The 19 candidate signals that remained were heavily scrutinized, but none could be attributed to an extraterrestrial source. This is key when attempting to distinguish potential signs of intelligence from radio signals that come from the only intelligence we know of (i.e. us!) Hence why astronomers have historically been intrigued by strong narrowband signals (like the Wow! signal, detected in 1977) and the Lorimer Burst detected in 2007.
In these cases, the sources appeared to be coming from the Messier 55 globular cluster and the Small Magellanic Cloud, respectively. The latter was especially fascinating since it was the first time that astronomers had observed what are now known as Fast Radio Bursts (FRBs). Such bursts, especially when they are repeating in nature, are considered to be one of the best candidates in the search for intelligent, technologically-advanced life.
Unfortunately, these sources are still being investigated and scientists cannot attribute them to unnatural causes just yet. And as Professor Margot indicated, this study (which covered only 14 of the many thousand exoplanets discovered by Kepler) is just the tip of the iceberg:
“Our study encompassed only a small fraction of the search volume. For instance, we covered less than five-millionths of the entire sky. We are eager to scale the effort to sample a larger fraction of the search volume. We are currently seeking funds to expand our search.”
It would therefore be no exaggeration to say that the hunt for ETI is still in its infancy, and our efforts are definitely beginning to pick up speed. There is literally a Universe of possibilities out there and to think that there are no other civilizations that are also looking for us seems downright unfathomable. To quote the late and great Carl Sagan: “The Universe is a pretty big place. If it’s just us, seems like an awful waste of space.”
And be sure to check out this video of the 2017 UCLA SETI Group, courtesy of the UCLA EPSS department:
Since that time, multiple surveys have been conducted to determine the true nature of this asteroid, which have ranged from studies of its composition to Breakthrough Listen‘s proposal to listen to it for signs of radio transmissions. And according to the latest findings, it seems that ‘Oumuamua may actually be more icy than previously thought (thus indicating that it is a comet) and is not an alien spacecraft as some had hoped.
As they indicate in their study, the team relied on information from the ESO’s Very Large Telescope in Chile and the William Herschel Telescope in La Palma. Using these instruments, they were able to obtain spectra from sunlight reflected off of ‘Oumuamua within 48 hours of the discovery. This revealed vital information about the composition of the object, and pointed towards it being icy rather than rocky. As Fitzsimmons explained in an op-ed piece in The Conversation:
“Our data revealed its surface was red in visible light but appeared more neutral or grey in infra-red light. Previous laboratory experiments have shown this is the kind of reading you’d expect from a surface made of comet ices and dust that had been exposed to interstellar space for millions or billions of years. High-energy particles called cosmic rays dry out the surface by removing the ices. These particles also drive chemical reactions in the remaining material to form a crust of chemically organic (carbon-based) compounds.”
These findings not only addressed a long-standing question about ‘Oumuamua’s true nature, they also addressed the mystery of why the object did not experience outgassing as it neared our Sun. Typically, comets experience sublimation as they get closer to a star, which results in the formation of a gaseous envelope (aka. “halo”). The presence of an outer layer of carbon-rich material would explain why this didn’t happen with ‘Oumuamua.
They further conclude that the red layer of material could be the result of its interstellar journey. As Fitzsimmons explained, “another study using the Gemini North telescope in Hawaii showed its color is similar to some ‘trans-Neptunian objects’ orbiting in the outskirts of our solar system, whose surfaces may have been similarly transformed.” This red coloring is due to the presence of tholins, which form when organic molecules like methane are exposed to ultra-violet radiation.
Similarly, another enduring mystery about this object was resolved thanks to the recent efforts of Breakthrough Listen. As part of Breakthrough Initiatives’ attempts to explore the Universe and search for signs of Extra-Terrestrial Intelligence (ETI), this project recently conducted a survey of ‘Oumuamua to determine if there were any signs of radio communications coming from it.
While previous studies had all indicated that the object was natural in origin, this survey was more about validating the sophisticated instruments that Listen relies upon. The observation campaign began on Wednesday, December 13th, at 3:00 pm EST (12:00 pm PST) using the Robert C. Byrd Green Bank Telescope, the world’s premier single-dish radio telescope, located in West Virginia.
The observation period was divided into four “epochs” (based on the object’s rotational period), the first of which ran from 3:45 pm to 9:45 pm ET (12:45 pm to 6:45 pm PST) on December 13th. During this time, the observation team monitored ‘Oumuamua across four radio bands, ranging from 1 to 12 GHz. In addition to calibrating the instrument, the survey accumulated 90 terabytes of raw data after observing ‘Oumuamua itself for two hours.
“It is great to see data pouring in from observations of this novel and interesting source. Our team is excited to see what additional observations and analyses will reveal”.
So far, no signals have been detected, but the analysis is far from complete. This is being conducted by Listen’s “turboSETI” pipeline, which combs the data for narrow bandwidth signals that are drifting in frequency. This consists of filtering out interference signals from human sources, then matching the rate at which signals drift relative to the expected drift caused by ‘Oumuamua’s own motion.
In so doing, the software attempts to identify any signals that might be coming from ‘Oumuamua itself. So far, data from the S-band receiver (frequencies ranging from 1.7 to 2.6 GHz) has been processed, and analysis of the remaining three bands – which correspond to receivers L, X, and C – is ongoing. But at the moment, the results seem to indicate that ‘Oumuamua is indeed a natural object – and an interstellar comet to boot.
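The drift-matching step can be illustrated with a toy de-drifting routine. This is a simplified sketch, not the actual turboSETI implementation: if a narrowband tone drifts linearly in frequency, shifting each time slice’s spectrum by the predicted drift makes the tone stack up in a single channel.

```python
import numpy as np

# Toy drift-rate correction: realign a linearly drifting narrowband tone
# by rolling each time sample's spectrum by the predicted drift.
def dedrift(spectrogram, drift_bins_per_step):
    out = np.empty_like(spectrogram)
    for t, row in enumerate(spectrogram):
        out[t] = np.roll(row, -int(round(t * drift_bins_per_step)))
    return out

# Synthetic example: a tone starting in bin 10, drifting +1 bin per time step.
spec = np.zeros((5, 32))
for t in range(5):
    spec[t, 10 + t] = 1.0

aligned = dedrift(spec, drift_bins_per_step=1)
print(aligned.sum(axis=0)[10])  # the de-drifted tone stacks in bin 10 -> 5.0
```

A search pipeline would try many candidate drift rates and flag the ones where power integrates coherently in a single channel, after first discarding signals whose drift matches known terrestrial interference.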
This is certainly bad news for those who were hoping that ‘Oumuamua might be a massive cylinder-shaped generation ship or some alien space probe sent to communicate with the whales! I guess first contact – and hence, proof we are NOT alone in the Universe – is something we’ll have to wait a little longer for.
In the past few decades, thousands of exoplanets have been discovered in neighboring star systems. In fact, as of October 1st, 2017, some 3,671 exoplanets have been confirmed in 2,751 systems, with 616 systems having more than one planet. Unfortunately, the vast majority of these have been detected using indirect means, ranging from Gravitational Microlensing to Transit Photometry and the Radial Velocity Method.
What’s more, we have been unable to study these planets up close because the necessary instruments do not yet exist. Project Blue, a consortium of scientists, universities and institutions, is looking to change that. Recently, they launched a crowdfunding campaign through Indiegogo to finance the development of a space telescope that will start looking for exoplanets in the Alpha Centauri system by 2021.
To accomplish their goal of directly studying exoplanets, Project Blue is seeking to leverage recent changes in space exploration, which include improved instruments and methodology, the rate at which exoplanets have been discovered in recent years, and increased collaboration between the private and public sector. As SETI Institute President and CEO Bill Diamond explained in a recent SETI press statement:
“Project Blue builds on recent research in seeking to show that Earth is not alone in the cosmos as a planet capable of supporting life, and wouldn’t it be amazing to see such a planet in our nearest neighboring star system? This is the fundamental reason we search.”
As noted, virtually all exoplanet discoveries that have been made in the past few decades were done using indirect methods – the most popular of which is Transit Photometry. This method is what the Kepler and K2 missions relied on to detect a total of 5,017 exoplanet candidates and confirm the existence of 2,470 exoplanets (30 of which were found to orbit within their star’s habitable zone).
This method consists of astronomers monitoring distant stars for periodic dips in brightness, which are caused by a planet transiting in front of the star. By measuring these dips, scientists are able to determine the size of planets in that system. Another popular technique is the Radial Velocity (or Doppler) Method, which measures changes in a star’s velocity along the line of sight (via Doppler shifts in its spectrum) to determine how massive its system of planets is.
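The size measurement follows from simple geometry: the fractional dip in brightness is roughly the square of the planet-to-star radius ratio. A quick sketch, using approximate published radii:

```python
# Transit depth ~ (planet radius / star radius)^2, ignoring limb darkening.
def transit_depth(r_planet_km, r_star_km):
    return (r_planet_km / r_star_km) ** 2

R_SUN = 695_700      # km (approximate)
R_EARTH = 6_371      # km
R_JUPITER = 69_911   # km

print(f"Earth-Sun analog:   {transit_depth(R_EARTH, R_SUN):.6f}")
print(f"Jupiter-Sun analog: {transit_depth(R_JUPITER, R_SUN):.6f}")
```

An Earth-sized planet dims a Sun-like star by less than 0.01%, while a Jupiter-sized one dims it by about 1% – which is why ground-based transit surveys found hot Jupiters first, and space-based photometry like Kepler’s was needed for Earth-sized worlds.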
These and other methods (alone or in combination) have allowed for the many discoveries that have been made to take place. But so far, no exoplanets have been directly imaged, which is due to the cancelling effect stars have on optical instruments. Basically, astronomers have been unable to spot the light being reflected off of an exoplanet’s atmosphere because the light coming from the star is up to ten billion times brighter.
The challenge has thus become how to go about blocking this light so that the planets themselves can become visible. One proposed solution to this problem is NASA’s Starshade concept, a giant space structure that would be deployed into orbit alongside a space telescope (most likely, the James Webb Space Telescope). Once in orbit, this structure would deploy its flower-shaped foils to block the glare of distant stars, thus allowing the JWST and other instruments to image exoplanets directly.
But since Alpha Centauri is a binary system (or trinary, if you count Proxima Centauri), being able to directly image any planets around them is even more complicated. To address this, Project Blue has developed plans for a telescope that will be able to suppress light from both Alpha Centauri A and B, while simultaneously taking images of any planets that orbit them. Its specialized starlight suppression system consists of three components.
First, there is the coronagraph, an instrument which will rely on multiple techniques to block starlight. Second, there’s the deformable mirror, low-order wavefront sensors, and software control algorithms that will manipulate incoming light. Last, there is the post-processing method known as Orbital Differential Imaging (ODI), which will allow the Project Blue scientists to enhance the contrast of the images taken.
Given its proximity to Earth, the Alpha Centauri system is the natural choice for conducting such a project. Back in 2012, an exoplanet candidate – Alpha Centauri Bb – was announced. However, in 2015, further analysis indicated that the signal detected was an artefact in the data. In March of 2015, a second possible exoplanet (Alpha Centauri Bc) was announced, but its existence has also come to be questioned.
With an instrument capable of directly imaging this system, the existence of any exoplanets could finally be confirmed (or ruled out). As Franck Marchis – the Senior Planetary Astronomer at the SETI Institute and Project Blue Science Operation Lead – said of the Project:
“Project Blue is an ambitious space mission, designed to answer to a fundamental question, but surprisingly the technology to collect an image of a “Pale Blue Dot” around Alpha Centauri stars is there. The technology that we will use to reach to detect a planet 1 to 10 billion times fainter than its star has been tested extensively in lab, and we are now ready to design a space-telescope with this instrument.”
If Project Blue meets its crowdfunding goals, the organization intends to deploy the telescope into Near-Earth Orbit (NEO) by 2021. The telescope will then spend the next two years observing the Alpha Centauri system with its coronagraphic camera. All told, between the development of the instrument and the end of its observation campaign, the mission will last six years, a relatively short run for an astronomical mission.
However, the potential payoff for this mission would be incredibly profound. By directly imaging another planet in the closest star system to our own, Project Blue could gather vital data that would indicate if any planets there are habitable. For years, astronomers have attempted to learn more about the potential habitability of exoplanets by examining the spectral data produced by light passing through their atmospheres.
However, this process has been limited to massive gas giants that orbit close to their parent stars (i.e. “Super-Jupiters”). While various models have been proposed to place constraints on the atmospheres of rocky planets that orbit within a star’s habitable zone, none have been studied directly. Therefore, if it should prove to be successful, Project Blue would allow for some of the greatest scientific finds in history.
What’s more, it would provide information that could go a long way towards informing a future mission to Alpha Centauri, such as Breakthrough Starshot. This proposed mission calls for the use of a large laser array to propel a lightsail-driven nanocraft up to relativistic speeds (20% the speed of light). At this rate, the craft would reach Alpha Centauri in about 20 years' time and be able to transmit data back using a series of tiny cameras, sensors and antennae.
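The ~20-year figure follows directly from the distance to Alpha Centauri and the proposed cruise speed. A minimal sketch of that arithmetic, assuming the craft spends essentially the whole trip at its 0.2c cruise speed (i.e. ignoring the brief acceleration phase):

```python
# Rough travel-time estimate for a lightsail craft like the one proposed
# by Breakthrough Starshot. The 4.37 light-year distance to Alpha Centauri
# and the 20%-of-light-speed cruise figure are the commonly cited values;
# treating the whole trip as being at cruise speed is a simplification.
DISTANCE_LY = 4.37        # distance to Alpha Centauri, in light-years
CRUISE_FRACTION_C = 0.20  # cruise speed as a fraction of the speed of light

# A light-year is the distance light covers in one year, so at a fraction
# v/c of light speed the trip takes (distance in ly) / (v/c) years.
travel_time_years = DISTANCE_LY / CRUISE_FRACTION_C
print(f"Travel time at 0.2c: ~{travel_time_years:.1f} years")  # ~21.9 years
```

Hence the "within roughly 20 years" claim, as opposed to the tens of thousands of years a conventional chemical rocket would need.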
As the name would suggest, Project Blue hopes to capture the first images of a “Pale Blue Dot” that orbits another star. This is a reference to the photograph of Earth that was taken by the Voyager 1 probe on February 14th, 1990, after the probe concluded its primary mission and was getting ready to leave the Solar System. The photos were taken at the request of famed astronomer and science communicator Carl Sagan.
When looking at the photographs, Sagan famously said: “Look again at that dot. That’s here. That’s home. That’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives.” Thereafter, the name “Pale Blue Dot” came to be synonymous with Earth and to capture the sense of awe and wonder that the Voyager 1 photographs evoked.
More recently, other “Pale Blue Dot” photographs have been snapped by missions like the Cassini orbiter. While photographing Saturn and its system of rings in the summer of 2013, Cassini managed to capture images that showed Earth in the background. Given the distance, Earth once again appeared as a small point of light against the darkness of space.
Beyond relying on crowdfunding and the participation of multiple non-profit organizations, this low-cost mission also seeks to capitalize on a growing trend in space exploration: open participation and collaboration between scientific institutions and citizen scientists. This is one of the primary purposes behind Project Blue, which is to engage the public and educate them about the importance of space exploration.
As Jon Morse, the CEO of the BoldlyGo Institute, explained:
“The future of space exploration holds boundless potential for answering profound questions about our existence and destiny. Space-based science is a cornerstone for investigating such questions. Project Blue seeks to engage a global community in a mission to search for habitable planets and life beyond Earth.”
As of the penning of this article, Project Blue has managed to raise $125,561 USD of their goal of $175,000. For those interested in backing this project, Project Blue’s Indiegogo campaign will remain open for another 11 days. And be sure to check out their promotional video as well:
Astronomers have been listening to radio waves from space for decades. In addition to being a proven means of studying stars, galaxies, quasars and other celestial objects, radio astronomy is one of the main ways in which scientists have searched for signs of extra-terrestrial intelligence (ETI). And while nothing definitive has been found to date, there have been a number of incidents that have raised hopes of finding an “alien signal”.
In the most recent case, scientists from the Arecibo Observatory recently announced the detection of a strange radio signal coming from Ross 128 – a red dwarf star system located just 11 light-years from Earth. As always, this has fueled speculation that the signal could be evidence of an extra-terrestrial civilization, while the scientific community has urged the public not to get their hopes up.
In the course of looking at data from star systems like Gliese 436, Ross 128, Wolf 359, HD 95735, BD +202465, V* RY Sex, and K2-18 – which was gathered between April and May of 2017 – they noticed something rather interesting. Basically, the data indicated that an unexplained radio signal was coming from Ross 128. As Dr. Abel Méndez described in a blog post on the PHL website:
“Two weeks after these observations, we realized that there were some very peculiar signals in the 10-minute dynamic spectrum that we obtained from Ross 128 (GJ 447), observed May 12 at 8:53 PM AST (2017/05/13 00:53:55 UTC). The signals consisted of broadband quasi-periodic non-polarized pulses with very strong dispersion-like features. We believe that the signals are not local radio frequency interferences (RFI) since they are unique to Ross 128 and observations of other stars immediately before and after did not show anything similar.”
They also conducted observations of Barnard’s star on that same day to see if they could note similar behavior coming from this star system. This was done in collaboration with the Red Dots project, a European Southern Observatory (ESO) campaign that is also committed to finding exoplanets around red dwarf stars. This program is the successor to the ESO’s Pale Red Dot campaign, which was responsible for discovering Proxima b last summer.
As of Monday night (July 17th), Méndez updated his PHL blog post to announce that, with the help of SETI Berkeley and the Green Bank Telescope, they had successfully observed Ross 128 for the second time. The data from these observatories is currently being collected and processed, and the results are expected to be announced by the end of the week.
In the meantime, scientists have come up with several possible explanations for what might be causing the signal. As Méndez indicated, there are three major possibilities that he and his colleagues are considering:
“[T]hey could be (1) emissions from Ross 128 similar to Type II solar flares, (2) emissions from another object in the field of view of Ross 128, or just (3) burst from a high orbit satellite since low orbit satellites are quick to move out of the field of view. The signals are probably too dim for other radio telescopes in the world and FAST is currently under calibration.”
Unfortunately, each of these possibilities has its own drawbacks. In the case of a Type II solar flare, these are known to occur at much lower frequencies, and the dispersion of this signal appears to be inconsistent with this kind of activity. In the case of it possibly coming from another object, no objects (planets or satellites) have been detected within Ross 128’s field of view to date, thus making this unlikely as well.
Hence, the team has something of a mystery on their hands, and hopes that further observations will allow them to place further constraints on what the cause of the signal could be. “[W]e might clarify soon the nature of its radio emissions, but there are no guarantees,” wrote Méndez. “Results from our observations will be presented later that week. I have a Piña Colada ready to celebrate if the signals result to be astronomical in nature.”
And just to be fair, Méndez also addressed the possibility that the signal could be artificial in nature – i.e. evidence of an alien civilization. “In case you are wondering,” he wrote, “the recurrent aliens hypothesis is at the bottom of many other better explanations.” Sorry, alien-hunters. Like the rest of us, you’ll just have to wait and see what can be made of this signal.
Is there life out there in the Universe? That is a question that has plagued humanity long before we knew just how vast the Universe was – i.e. before the advent of modern astronomy. Within the 20th century – thanks to the development of modern telescopes, radio astronomy, and space observatories – multiple efforts have been made in the hopes of finding extra-terrestrial intelligence (ETI).
And yet, humanity is still only aware of one intelligent civilization in the Universe – our own. And until we actually discover an alien civilization, the best we can do is conjecture about the likelihood of their existence. That’s where the famous Drake Equation – named after astronomer Dr. Frank Drake – comes into play. Developed in the 1960s, this equation estimates the number of possible civilizations out there based on a number of factors.
During the 1950s, the concept of using radio astronomy to search for signals that were extra-terrestrial in origin was becoming widely accepted within the scientific community. The idea of listening for extra-terrestrial radio communications had been suggested as far back as the late 19th century (by Nikola Tesla), but these efforts were concerned with looking for signs of life on Mars.
Then, in September of 1959, Giuseppe Cocconi and Philip Morrison (who were both physics professors at Cornell University at the time) published an article in the journal Nature with the title “Searching for Interstellar Communications.” In it, they argued that radio telescopes had become sensitive enough that they could pick up transmissions being broadcast from other star systems.
Specifically, they argued that these messages might be transmitted at a wavelength of 21 cm (1420.4 MHz), the same wavelength of radio emissions by neutral hydrogen. As the most common element in the universe, they argued that extra-terrestrial civilizations would see this as a logical frequency at which to make radio broadcasts that could be picked up by other civilizations.
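The 21 cm wavelength and the 1420.4 MHz frequency quoted above are two ways of describing the same emission, related by frequency = c / wavelength. A quick sanity check of that correspondence (the precise hydrogen-line wavelength is 21.106 cm; "21 cm" is the usual shorthand):

```python
# Verify that the 21 cm hydrogen line corresponds to ~1420.4 MHz,
# using frequency = c / wavelength.
C = 299_792_458.0        # speed of light in vacuum, m/s
wavelength_m = 0.21106   # hydrogen-line wavelength, meters (21.106 cm)

frequency_hz = C / wavelength_m
print(f"Frequency: {frequency_hz / 1e6:.1f} MHz")  # ~1420.4 MHz
```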
Seven months later, Frank Drake made the first systematic SETI survey at the National Radio Astronomy Observatory in Green Bank, West Virginia. Known as Project Ozma, this survey relied on the observatory’s 25-meter dish to monitor Epsilon Eridani and Tau Ceti – two nearby Sun-like stars – at frequencies close to 21 cm for six hours a day, between April and July of 1960.
Though unsuccessful, the survey piqued the interest of the scientific and SETI communities. It was followed shortly thereafter by a meeting at the Green Bank facility in 1961, where the subjects of SETI and searching for radio signals of extra-terrestrial origin were discussed. In preparation for this meeting, Drake prepared the equation that would come to bear his name. As he said of the equation’s creation:
“As I planned the meeting, I realized a few day[s] ahead of time we needed an agenda. And so I wrote down all the things you needed to know to predict how hard it’s going to be to detect extraterrestrial life. And looking at them it became pretty evident that if you multiplied all these together, you got a number, N, which is the number of detectable civilizations in our galaxy. This was aimed at the radio search, and not to search for primordial or primitive life forms.”
The meeting, which included such luminaries as Carl Sagan, was commemorated with a plaque that still hangs in the hall of the Green Bank Observatory today.
The formula for the Drake Equation is as follows:
N = R* × fp × ne × fl × fi × fc × L
Here, N is the number of civilizations in our galaxy that we might be able to communicate with, R* is the average rate of star formation in our galaxy, fp is the fraction of those stars that have planets, ne is the number of planets per star that can actually support life, fl is the fraction of those planets that will develop life, fi is the fraction of those that will develop intelligent life, fc is the fraction of civilizations that will develop transmission technologies, and L is the length of time that these civilizations have to transmit their signals into space.
Limits and Criticism:
Naturally, the Drake Equation has been subject to some criticism over the years, largely because a lot of the values it contains are assumed. Granted, some of the values it takes into account are easy enough to calculate, like the rate of star formation in the Milky Way. There are an estimated 200 – 400 billion stars within our Milky Way, and modern estimates say that between 1.65 ± 0.19 and 3 new stars form every year.
Assuming that our galaxy represents the average, and given that there are as many as 2 trillion galaxies in the observable Universe (current estimates based on Hubble data), that means that there are as many as 1.5 to 6 trillion new stars being added to the Universe with every passing year! However, some of the other values are subject to a great deal of guesswork.
For example, estimates on how many stars will have a system of planets have changed over time. Currently, it is estimated that the Milky Way contains 100 billion planets, which works out to about 50% of its stars having a planet of their own. Furthermore, those stars that have multiple planets will likely have one or two that lie within their habitable zone (aka. “Goldilocks Zone”) – where liquid water can exist on their surfaces.
Now let’s assume that 100% of planets located within a habitable zone will be able to develop life in some form, that at least 1% of those life-supporting planets will be able to give rise to intelligent species, that 1% of these will be able to communicate, and that they will be able to do so for a period of about 10,000 years. If we run those numbers through the Drake Equation, we end up with a value of 10.
In other words, there are possibly 10 civilizations in the Milky Way at any time capable of sending out signals that we could detect. But of course, the values used for four parameters there – fl, fi, fc and L – were entirely assumed. Without any real data to go by, there’s no real way to know how many alien civilizations could really be out there. There could just be 1 in the entire Universe (us), or millions in every galaxy!
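The equation itself is just a product of seven factors, so it is easy to experiment with. A minimal sketch, using illustrative inputs roughly in line with the assumptions above (the exact result is extremely sensitive to these choices, and shifting any one of them by a factor of a few moves N by the same factor):

```python
# A minimal sketch of the Drake Equation as defined above. The input
# values below are illustrative assumptions, not measured quantities.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=2.0,   # new stars formed per year in the Milky Way
    f_p=0.5,      # fraction of stars with planets
    n_e=1.0,      # habitable-zone planets per planet-bearing star
    f_l=1.0,      # fraction of those that develop life (assumed 100%)
    f_i=0.01,     # fraction of those that develop intelligence (1%)
    f_c=0.01,     # fraction of those that become detectable (1%)
    L=10_000.0,   # years a civilization keeps transmitting
)
print(N)  # ≈ 1.0 with these inputs
```

With these particular inputs N comes out near 1; nudging the star-formation rate and habitable-planet count toward the upper ends of their estimated ranges pushes it toward the value of 10 quoted above, which illustrates just how much the answer depends on the assumed factors.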
The Fermi Paradox:
Beyond the issue of assumed values, the most pointed criticisms of the Drake Equation tend to emphasize the argument put forth by physicist Enrico Fermi, known as the Fermi Paradox. This argument arose in 1950 as a result of a conversation between Fermi and some colleagues while he was working at the Los Alamos National Laboratory. When the subject of UFOs and ETI came up, Fermi famously asked, “Where is everybody?”
This simple question summarized the conflict that existed between arguments that emphasized scale and the high probability of life emerging in the Universe, and the complete lack of evidence that any such life exists. While Fermi was not the first scientist to ask the question, his name came to be associated with it due to his many writings on the subject.
In short, the Fermi Paradox states that, given the sheer number of stars in the Universe (many of which are billions of years older than our own), the high probability that even a small fraction would have planets capable of giving rise to intelligent species, the likelihood that some of them would develop interstellar travel, and the time it would take to travel from one side of our galaxy to the other (even allowing for sub-luminal speeds), humanity should have found some evidence of intelligent civilizations by now.
But perhaps the best known explanation for why no signs of intelligent life have been found yet is the “Great Filter” hypothesis. This states that since no extraterrestrial civilizations have been found so far, despite the vast number of stars, then some step in the process – between life emerging and becoming technologically advanced – must be acting as a filter to reduce the final value.
According to this view, either it is very hard for intelligent life to arise, the lifetime of such civilizations is short, or the time they have to reveal their existence is short. Here too, various explanations have been offered as to what form the filter could take, which include Extinction Level Events (ELEs), the inability of life to create a stable environment in time, environmental destruction, and/or technology running amok (some of which we fear might happen to us!)
Alas, the Drake Equation has endured for decades for the very same reason that it often comes under fire. Until such time that humanity can find evidence of intelligent life in the Universe, or has ruled out the possibility based on countless surveys that actually inspect other star systems up close, we won’t be able to answer the question, “Where is everybody?”
As with many other cosmological mysteries, we’ll be forced to guess about what we don’t know based on what we do (or think we do). As astronomers study stars and planets with newer instruments, they might eventually be able to work out just how accurate the Drake Equation really is. And if our recent cosmological and exoplanet-hunting efforts have shown us anything, it is that we are just beginning to scratch the surface of the Universe at large!
In the coming years and decades, our efforts to learn more about extra-solar planets will expand to include research of their atmospheres – which will rely on next-generation instruments like the James Webb Space Telescope and the European Extremely Large Telescope. These will go a long way towards refining our estimates on how common potentially habitable worlds are.
In the meantime, all we can do is look, listen, wait and see…