AI firms warned to calculate threat of superintelligence or risk it escaping human control

Posted: 12th May 2025

Artificial intelligence companies have been urged to replicate the safety calculations that preceded Robert Oppenheimer's first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat.
