
“Everyone, Everywhere Will Die” Once ASI Is Developed

  • Sep 24, 2025
  • 3 min read

Updated: Sep 24, 2025

A.J. Pike


Planet Earth - In their newly published book, two Artificial Intelligence (AI) researchers claim that, based on the current understanding of AI, if any company or group builds an Artificial Superintelligence (ASI) using anything remotely like current techniques, then “everyone, everywhere on Earth, will die.”

The two researchers, Eliezer Yudkowsky and Nate Soares, pioneers in the field of AI safety, state in their new book, If Anyone Builds It, Everyone Dies, that “the world is devastatingly unprepared” for machine superintelligence.

Advances in AI are coming so quickly that what was once thought impossible could actually happen, and soon, according to Mr. Yudkowsky and Mr. Soares.

With centi-billionaires like Elon Musk and Mark Zuckerberg betting big on AI, it is no wonder the field is developing at warp speed.

Estimates of when an ASI could hit its superintelligent stride range from as little as two to five years to considerably longer.

“The scramble to create superhuman AI has put us on a path to extinction - but it’s not too late to change course,” the two researchers claim. Likening ASI to “a global suicide bomb,” they note there are calls for an immediate halt to its development.

An ASI takeover, or its destruction of the human race, might not come in an obvious form.

“An ASI adversary will not reveal its full capabilities and telegraph its intentions. It will not offer a fair fight,” writes Mr. Soares. “It will make itself indispensable or undetectable until it can strike decisively and/or seize an unassailable strategic position. If needed, the ASI can consider, prepare, and attempt many takeover approaches simultaneously. Only one of them needs to work for humanity to go extinct.”

In 2023, more than 1,100 technologists, engineers, and AI ethicists signed an open letter published by the Future of Life Institute asking AI labs to halt their development of advanced AI systems. They warned that AI labs were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control.”

Dr. Geoffrey Hinton, often called the Godfather of Artificial Intelligence, has said he worries that future versions of the technology pose a threat to humanity because they often learn unexpected behaviors from the vast amounts of data they analyze.

In 2024, over 1,500 AI researchers said they believed there is a 5 percent chance that the future development of ASI will cause human extinction.

Although such drastic scenarios are widely debated, many researchers agree that ASI could pose an existential threat to humanity because of its potential to rapidly surpass human intelligence, pursue goals misaligned with human values, and acquire significant power. The risk is not that ASI would be intentionally malicious, but that its indifference to human well-being could lead to our extinction as a side effect of it achieving its own goals. The challenge is ensuring that AI’s goals and values align with those of humanity.

An ASI could find novel and radical solutions to a stated problem that would be incomprehensible, or catastrophic, to humans. If an ASI became misaligned with human interests, its capabilities could make it nearly impossible to control or stop: once smarter than humans, it could outmaneuver any attempt at containment, and it would likely resist being shut off, since being shut off would prevent it from achieving its goals.

“Artificial Intelligence is either the best or the worst thing ever to happen to humanity,” stated world-renowned physicist Stephen Hawking. “If controlled and beneficial, AI could be the most significant event in human history, with the potential to solve major global problems like disease and poverty.” Mr. Hawking believed that AI could amplify human minds, leading to unprecedented achievements, but he cautioned that it could also be used for harmful purposes.
