Apocalypse Now
A cyberethnographer responds to AI doomsday predictions
We’re obsessed with the apocalypse, and we have been for a long time. Since 66 CE, a millenarian obsession has lurked within mankind. Take a look at this list from Wikipedia of all the times the apocalypse was predicted. I invite you to consider how many times it has actually happened. I’m writing this, aren’t I? We’re still here, aren’t we? And thus, when people speak of an AI apocalypse, I ask you: what is different this time?
Apocalyptic predictions are poor tools for determining when the world will end. They are, however, excellent at revealing the beliefs of the doomsayers proffering them. With AI, the claim is: we are on the verge of creating an intelligence so powerful it will eclipse, outmaneuver, or eradicate us.
Here is the predicted path.
Fallen state: The human condition is flawed, with imperfect bodies and average intelligence.
Prophecy: Technology, specifically AI, will free us from our mortal bodies and usher in an era of superintelligence.
Tribulation: People will lose jobs and society will be upended by the arrival of autonomous, agentic superintelligent AI.
Existential risk: Misaligned superintelligence could derail humanity.
Divine or supernatural intervention: The AI researcher is the singular individual capable of solving the alignment problem and preventing humanity’s demise.
Utopia: An aligned superintelligence could engender an age of abundance, multiplanetary life, and digital immortality.
If you grew up attending catechism or reading the Torah, perhaps you recognize the shape of this story. It bears a striking resemblance to Judeo-Christian eschatology (a fancy word for the study of the end of the world). In the holy texts, a fallen humanity’s redemption is prophesied. A messiah returns during a time of tribulation, before a final judgement when the chosen few are brought to Heaven.
In our modern prophecy, the divine is replaced with the technological. It is technology that brings about the demise, but it is also technology that can prevent it (if handled by the right corporation). The makers of the new so-called intelligences never seem to be the ones suffering from the predicted turmoil; it is always the other: the working man, the populace, the public. The average citizen loses their job. The average citizen gets killed by an autonomous drone. The privileged hide out in unmapped bunkers far, far away.
The idea of a deadly autonomous superintelligence is curious upon further thought. It is anchored in the belief that an independent entity, given sufficient power and intellect, would ineluctably resort to the mass extermination of humans. These predictions reveal the predictors’ association of intelligence with mass violence, deception, and control. But historically, only humans have been responsible for the mass extinction of other species. Perhaps we should not be so quick to project our own vices onto an alien intelligence. Then again, perhaps we are creating these systems in our image. In which case the question becomes: Whose image is it?
Ruby Justice is a cyberethnographer and professor of design and media theory at NYU