Over 800 Tech Leaders Call for Halt to “Superintelligence” Research Amid Fears of Human Extinction

More than 800 prominent figures, including Apple co-founder Steve Wozniak and Virgin Group chairman Richard Branson, have recently signed an open statement calling for an immediate halt to the development of superintelligence.

“Superintelligence” refers to artificial intelligence systems that surpass human cognitive abilities in all fields. The joint statement warns that AI systems exceeding human intelligence pose risks ranging from economic collapse to human extinction. The declaration is seen as a direct challenge to the race toward ever more capable AI models being driven by tech giants such as OpenAI and Meta, and could mark a major turning point for the AI industry.

The list of signatories includes some of the world’s most influential tech leaders, scientists, and public figures. Notably, Yoshua Bengio and Geoffrey Hinton, widely considered the godfathers of modern AI, have also lent their names to the call for caution.


The signatories issued a stark warning: “The prospect of superintelligence has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

This call for a pause comes at a critical juncture. Meta has recently renamed its AI division to “Meta Superintelligence Labs,” while OpenAI and Elon Musk’s xAI are engaged in an increasingly public race to achieve artificial general intelligence. The intervention by industry veterans highlights a growing internal concern about the pace and direction of AI development.

The breadth of support for this joint statement is striking, extending far beyond Silicon Valley. Former National Security Advisor Susan Rice and former Chairman of the Joint Chiefs of Staff Mike Mullen have signed on, underscoring its national security implications. Figures from across the political spectrum, including Meghan Markle as well as Trump allies Steve Bannon and Glenn Beck, have also joined, indicating a rare convergence of concern across different sectors.

Stuart Russell, a leading AI safety researcher at UC Berkeley who has long warned about the risks of superintelligence, helped organize this initiative. His research has focused not on whether humans can build superintelligent systems, but on whether they can control them once they exist. This foundational concern is central to the current appeal.

The statement advocates a comprehensive moratorium on the development of superintelligence. It calls for a halt until there is “broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” This sets an exceptionally high bar that could significantly slow or freeze current AI development timelines.

What sets this joint statement apart from previous warnings is the composition of its signatories. Many are not external critics but the very researchers and entrepreneurs who built today’s AI capabilities. When top scientists like Bengio and Hinton, whose work underpins modern large language models, urge the field to “hit the brakes,” the industry is likely to pay serious attention. Their position suggests a deep-seated concern stemming from an intimate understanding of the technology’s potential trajectory.

The future course of action may largely depend on how governments respond. The European Union is already implementing AI regulations, and the United States has been exploring safety frameworks. This petition provides lawmakers with a strong impetus to consider implementing more stringent controls on cutting-edge AI research, potentially influencing regulatory policies worldwide.

