AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.
