AI firms warned to calculate threat of superintelligence or risk it escaping human control

Artificial intelligence companies have been urged to replicate the safety calculations that preceded Robert Oppenheimer’s first nuclear test before they release all-powerful systems.
Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those performed by the US physicist Arthur Compton before the Trinity test, and had found a 90% probability that a highly advanced AI would pose an existential threat.