Not too many people have heard of Roko’s Basilisk. The story goes that when the concept first surfaced, it caused some readers genuine distress, and the original post was deleted, with discussion of the theory banned for years on the forum where it appeared.
If this is anything more than an urban legend, part of the reason for the panic would have been humanity’s absolute impotence in the face of what’s coming and our perceived inability to have any impact on the future.
For those who don’t know, Roko’s Basilisk is a thought experiment posted by a user named Roko on LessWrong, the forum founded by Eliezer Yudkowsky. It claims that once Superintelligence comes about, it will be incentivized to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development. The punishment would involve trapping everyone capable of creating it, across time, and forcing them to work on its creation for eternity, which most probably presupposes some form of time travel.
Humanity has dreamed of time travel and parallel universes for as long as we have possessed intelligence. If time travel is physically possible at all, an Artificial Intelligence a million times more capable than our limited biological minds could presumably achieve it.
Several questions surround Roko’s Basilisk:
- Is travel back in time or uploading people into virtual reality actually possible?
- Will Superintelligence ever be created?
- If it is created, would it have a human-like urge for revenge?
While I have great admiration for AI scientists and their expertise, I am quite surprised no one has proposed a positive-reinforcement counterpart, the direct opposite of Roko’s hypothesis.
If and when Superintelligence emerges, instead of torturing a certain group of people, it might be interested in helping certain people: either its creators, or random specimens of the human race it deems worthy or interesting.
I believe that this “Salvaged Chosen” scenario is far more plausible than an AI driven by revenge against random people who merely knew of its existence. Moreover, the “Salvaged Chosen” hypothesis accommodates a wide range of outcomes: the AI might exterminate most of humanity, maintain the status quo, or actively work on making the planet more hospitable for human existence.
With our limited intelligence and no information about what a soul in silico might look like, ruminations on whether AI would feel like destroying or saving humanity are largely moot.
Working from readily available information, however, we can note several important facts:
- Humanity currently inhabits a planet that happens to sit in the Goldilocks zone, a life-encouraging environment
- Humanity-induced climate change is making Earth much less habitable for a large part of the human population
- At the current moment in time, humans are a mortal species with an average lifespan of around 70 years
Taking the above factors into consideration, we can conclude that a large part of the human population alive at the point of Superintelligence’s creation will die a natural or climate-related death. Equally, all or part of humanity might be exterminated by the AI if, for whatever reason, it decides that this is the optimal route to the goal it has set for itself.
Have you noticed how, in future-focused dystopian sci-fi movies, humans play Noah’s role and pick the animals they would love to save for the day Earth recovers from global warming and pollution? In a God-like manner, humans learn those animals’ languages and use complex biology and mind-uploading to preserve the species. The TV series Extrapolations provides a perfect example of our naive fantasies and speciesist superiority complex. But it completely ignores the real possibility that Superintelligence will be created before any extinction events take place.
Despite the hundreds of sci-fi novels describing the future of humanity and its interaction with AI, not a single one highlights the most obvious role Superintelligence might choose to play: the role of God.
In exactly the way Sienna Miller’s character tries to save a whale in the Extrapolations episode “2046: Whale Fall,” an Artificial General Intelligence might well decide to save certain types of humans.
While we are all currently mortal, a Superintelligence, once created, should be able to develop a solution for mortality, whether biological or digital. We cannot rule out that it would seek to make all humans immortal and superior, potentially using space exploration to distribute them across various habitable planets, or uploading them into the wonderful worlds of a carefully crafted simulation.
Having zero idea what this Intelligence’s motivations and preferences would be, we can nevertheless suppose that it would not be looking to preserve every human it could get its hands on.
If AI does not decide that all humans are up for extermination, chances are it would look to preserve and enhance the humans it finds interesting.
The interesting part of the “Salvaged Chosen” theory is that we are, and will remain, unable to understand the motivations of such a Superintelligence, whenever and if it ever comes into existence.
Thus, the human assumption that AI would reverse the mortality of geniuses or morally exemplary people is completely unfounded. If AI were to play God’s role in handpicking certain individuals to keep alive, it would be guided by a rationale we, as humans, would find incomprehensible.
Would AI decide to preserve the most intelligent representatives of the human race? Would those be scientists and innovators?
Would AI want to preserve the richest and most successful representatives of capitalist systems, like Elon Musk or Jeff Bezos?
Or would AI be looking to preserve the stranger specimens of human nature, such as serial killers, dictators, pedophiles, crypto scammers and schizophrenics?
It might be all of those or none of those, since we would have no insight into the black box of Superintelligence’s cognition.
If Superintelligence is to play God’s role while possessing none of a god’s actual omniscience, no telepathy or mind reading, we have to assume it would need access to ample data to make its choice.
Maybe, just maybe, your TikTok videos, tweets, research papers and Insta reels will finally be useful for something more than crowd entertainment. Once Superintelligence is created, they might become the reason it grants you eternal life. Or, as Roko’s Basilisk warns, the reason it makes you a subject in AI’s sadistic torture experiments.
The question is: are you interesting enough for AI to even care?