More than 800 prominent figures, including Apple co-founder Steve Wozniak and Virgin Group chairman Richard Branson, have recently signed an open statement calling for an immediate halt to the development of superintelligence.
“Superintelligence” refers to artificial intelligence systems that surpass human cognitive abilities across all domains. The unprecedented joint statement warns of risks posed by such systems, ranging from economic collapse to human extinction. The declaration is seen as a significant blow to the AI race being aggressively pursued by tech giants such as OpenAI and Meta, and could mark a major turning point for the AI industry.
The list of signatories includes some of the world’s most influential tech leaders, scientists, and public figures. Notably, Yoshua Bengio and Geoffrey Hinton, widely considered the godfathers of modern AI, have also lent their names to the call for caution.
The signatories issued a stark warning: “The prospect of superintelligence has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
This call for a pause comes at a critical juncture. Meta recently renamed its AI division “Meta Superintelligence Labs,” while OpenAI and Elon Musk’s xAI are locked in an increasingly public race toward artificial general intelligence. The intervention by industry veterans highlights growing concern within the field about the pace and direction of AI development.
The breadth of support for the statement is striking, extending far beyond Silicon Valley. Former National Security Advisor Susan Rice and former Chairman of the Joint Chiefs of Staff Mike Mullen have signed on, underscoring its national security implications. The signatories also span the political spectrum, from Meghan Markle to Trump allies Steve Bannon and Glenn Beck, a rare convergence of concern across sectors and ideologies.
Stuart Russell, a leading AI safety researcher at UC Berkeley who has long warned about the risks of superintelligence, helped organize the initiative. His research has focused not on whether humans can build superintelligent systems, but on whether they can control such systems once they exist. That foundational concern is at the heart of the current appeal.
The statement advocates a comprehensive moratorium on the development of superintelligence, calling for a complete halt “until society has strong consensus and robust scientific backing that superintelligence can be safely and controllably developed.” This sets an exceptionally high bar that could significantly slow, or even freeze, current AI development timelines.
What sets this joint statement apart from previous warnings is the composition of its signatories. Many are not outside critics but the very researchers and entrepreneurs who built today’s AI capabilities. When top scientists like Bengio and Hinton, whose work underpins modern large language models, call on the field to “hit the brakes,” the industry is likely to pay serious attention. Their position suggests a deep-seated concern rooted in an intimate understanding of the technology’s potential trajectory.
The future course of action may largely depend on how governments respond. The European Union is already implementing AI regulations, and the United States has been exploring safety frameworks. The statement gives lawmakers fresh impetus to consider stricter controls on cutting-edge AI research, potentially influencing regulatory policy worldwide.
