AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned J. Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those performed by the US physicist Arthur Compton before the Trinity test, and had found a 90% probability that a highly advanced AI would pose an existential threat.
