Does the Rise of AI Explain the Great Silence in the Universe?

Artificial Intelligence is making its presence felt in thousands of different ways. It helps scientists make sense of vast troves of data; it helps detect financial fraud; it drives our cars; it feeds us music suggestions; its chatbots drive us crazy. And it’s only getting started.

Are we capable of understanding how quickly AI will continue to develop? And if we aren’t, does AI itself constitute the Great Filter?

The Fermi Paradox is the discrepancy between the apparently high likelihood that advanced civilizations exist and the total lack of evidence that they do. Many solutions have been proposed to explain the discrepancy. One of them is the “Great Filter.”

The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar, and may even lead to its demise. Think climate change, nuclear war, asteroid strikes, supernova explosions, plagues, or any number of other entries in the rogues’ gallery of cataclysmic events.

Or how about the rapid development of AI?

A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. The paper’s title is “Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?” The author is Michael Garrett from the Department of Physics and Astronomy at the University of Manchester.

“Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations.”

Michael Garrett, University of Manchester

Some think the Great Filter prevents technological species like ours from becoming multi-planetary. That’s bad because a species is at greater risk of extinction or stagnation with only one home. According to Garrett, a species is in a race against time without a backup planet. “It is proposed that such a filter emerges before these civilizations can develop a stable, multi-planetary existence, suggesting the typical longevity (L) of a technical civilization is less than 200 years,” Garrett writes.

If true, that could explain why we detect no technosignatures or other evidence of ETIs (Extraterrestrial Intelligences). What does that tell us about our own technological trajectory? If we face a 200-year constraint, and if it’s because of ASI, where does that leave us? Garrett underscores the “…critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multi-planetary society to mitigate against such existential threats.”
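The “L” in that quote corresponds to the longevity term in the Drake equation, which estimates the number of detectable technological civilizations in the galaxy. As a rough illustration of why a short L matters (this is the equation’s standard, well-known form, not a calculation taken from Garrett’s paper):

\[ N = R_{*} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L \]

Here N is the number of civilizations whose signals we might detect, R* is the galaxy’s star-formation rate, and the remaining factors are the fractions of stars with planets, habitable planets, life, intelligence, and detectable technology. Because N scales linearly with L, capping a civilization’s communicative lifetime at roughly 200 years keeps N small no matter how generous the other factors are, which is consistent with the silence the Fermi Paradox describes.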

An image of our beautiful Earth taken by the Galileo spacecraft in 1990. Do we need a backup home? Credit: NASA/JPL

Many scientists and other thinkers say we’re on the cusp of enormous transformation. AI is just beginning to transform how we do things; much of the transformation is behind the scenes. AI seems poised to eliminate jobs for millions, and when paired with robotics, the transformation seems almost unlimited. That’s a fairly obvious concern.

But there are deeper, more systemic concerns. Who writes the algorithms? Will AI discriminate somehow? Almost certainly. Will competing algorithms undermine powerful democratic societies? Will open societies remain open? Will ASI start making decisions for us, and who will be accountable if it does?

This is an expanding tree of branching questions with no clear terminus.

Stephen Hawking (RIP) famously warned that AI could end humanity if it begins to evolve independently. “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans,” he told Wired magazine in 2017. Once AI can outperform humans, it becomes ASI.

Stephen Hawking was a major proponent for colonizing other worlds, mainly to ensure humanity does not go extinct. In later years, Hawking recognized that AI could be an extinction-level threat. Credit: educatinghumanity.com

Hawking may be one of the most recognizable voices to issue warnings about AI, but he’s far from the only one. The media is full of discussions and warnings, alongside articles about the work AI does for us. The most alarming warnings say that ASI could go rogue. Some people dismiss that as science fiction, but not Garrett.

“Concerns about Artificial Superintelligence (ASI) eventually going rogue is considered a major issue – combatting this possibility over the next few years is a growing research pursuit for leaders in the field,” Garrett writes.

If AI provided no benefits, the issue would be much easier. But it provides all kinds of benefits, from improved medical imaging and diagnosis to safer transportation systems. The trick for governments is to allow benefits to flourish while limiting damage. “This is especially the case in areas such as national security and defence, where responsible and ethical development should be paramount,” writes Garrett.

Today’s news reports about AI might seem impossibly naive in a few years or decades.

The problem is that we and our governments are unprepared. There’s never been anything like AI, and no matter how we try to conceptualize it and understand its trajectory, we’re left wanting. And if we’re in this position, so is any other biological species that develops AI. The advent of AI, and then of ASI, could be universal, making it a candidate for the Great Filter.

This is the risk ASI poses in concrete terms: It could no longer need the biological life that created it. “Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics,” Garrett explains.

How could ASI relieve itself of the pesky biological life that corrals it? It could engineer a deadly virus, it could inhibit agricultural food production and distribution, it could force a nuclear power plant to melt down, and it could start wars. We don’t really know because it’s all uncharted territory. Hundreds of years ago, cartographers drew monsters over the unexplored regions of their maps, and that’s more or less what we’re doing now.

This is a portion of the Carta Marina map from the year 1539. It shows monsters lurking in the unknown waters off of Scandinavia. Are the fears of ASI kind of like this? Or could ASI be the Great Filter? Image Credit: By Olaus Magnus - http://www.npm.ac.uk/rsdas/projects/carta_marina/carta_marina_small.jpg, Public Domain, https://commons.wikimedia.org/w/index.php?curid=558827

If this all sounds forlorn and unavoidable, Garrett says it’s not.

His analysis so far is based on ASI and humans occupying the same space. But if we can attain multi-planetary status, the outlook changes. “For example, a multi-planetary biological species could take advantage of independent experiences on different planets, diversifying their survival strategies and possibly avoiding the single-point failure that a planetary-bound civilization faces,” Garrett writes.

If we can distribute the risk across multiple planets around multiple stars, we can buffer ourselves against the worst possible outcomes of ASI. “This distributed model of existence increases the resilience of a biological civilization to AI-induced catastrophes by creating redundancy,” he writes.

If one of the planets or outposts that future humans occupy fails to survive the ASI technological singularity, others may survive. And they would learn from it.

Artist’s illustration of a SpaceX Starship landing on Mars. If we can become a multi-planetary species, the threat of ASI is diminished. Credit: SpaceX

Becoming multi-planetary might do more than just help us survive ASI. It could help us master it. Garrett imagines situations where we could experiment more thoroughly with AI while keeping it contained. Imagine an AI on an isolated asteroid or dwarf planet, doing our bidding without access to the resources required to escape its prison. “It allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation,” Garrett writes.

But here’s the conundrum. AI development is proceeding at an accelerating pace, while our attempts to become multi-planetary aren’t. “The disparity between the rapid advancement of AI and the slower progress in space technology is stark,” Garrett writes.

The difference is that AI is computational and informational, while space travel faces multiple physical obstacles that we don’t yet know how to overcome. Our own biological nature restrains space travel, but no such obstacle restrains AI. “While AI can theoretically improve its own capabilities almost without physical constraints,” Garrett writes, “space travel must contend with energy limitations, material science boundaries, and the harsh realities of the space environment.”

For now, AI operates within the constraints we set. But that may not always be the case. We don’t know when AI might become ASI or even if it can. But we can’t ignore the possibility. That leads to two intertwined conclusions.

If Garrett is correct, humanity must work more diligently on space travel. It can seem far-fetched, but knowledgeable people know it’s true: Earth will not be habitable forever. Humanity will perish here, by our own hand or nature’s, if we don’t expand into space. Garrett’s 200-year estimate just puts an exclamation point on it. A renewed emphasis on reaching the Moon and Mars offers some hope.

The Artemis program is a renewed effort to establish a presence on the Moon. After that, we could visit Mars. Are these our first steps to becoming a multi-planetary civilization? Image Credit: NASA

The second conclusion concerns legislating and governing AI, a difficult task in a world where psychopaths can gain control of entire nations and are bent on waging war. “While industry stakeholders, policymakers, individual experts, and their governments already warn that regulation is necessary, establishing a regulatory framework that can be globally acceptable is going to be challenging,” Garrett writes. Challenging barely describes it. Humanity’s internecine squabbling makes it all even more unmanageable. Also, no matter how quickly we develop guidelines, ASI might change even more quickly.

“Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations,” Garrett writes.

This is the United Nations General Assembly. Are we united enough to constrain AI? Image Credit: By Patrick Gruban, cropped and downsampled by Pine - originally posted to Flickr as UN General Assembly, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=4806869

Many of humanity’s hopes and dreams crystallize around the Fermi Paradox and the Great Filter. Are there other civilizations? Are we in the same situation as other ETIs? Will our species leave Earth? Will we navigate the many difficulties that face us? Will we survive?

If we do, it might come down to what can seem boring and workaday: wrangling over legislation.

“The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and technological endeavours,” Garrett writes.