Business leaders and scientists call for AI pause


The non-profit Future of Life Institute focuses on the fundamental threats to human existence. Now it has published an open letter calling for an AI pause. The letter is signed by several prominent figures in business and science.

For at least six months, all AI labs should stop developing AI systems more powerful than GPT-4, the letter says, referring to ongoing projects as “giant AI experiments.”

The letter is signed by business leaders such as Elon Musk, Steve Wozniak, and Stability AI (Stable Diffusion) founder Emad Mostaque, as well as many notable AI researchers, including Turing Award winner Yoshua Bengio, Berkeley AI professor Stuart Russell, and language model critic Gary Marcus.

Three researchers from Google’s AI sister company DeepMind and a Google developer also signed the letter. So far, no one from OpenAI is among the signatories. In total, the open letter currently has 1,125 signatures.


Open Letter to OpenAI and Microsoft

The letter is addressed to “all AI labs,” asking them to stop working on AI systems more powerful than GPT-4. But its undertone is aimed primarily at the aggressive deployment of large language models by Microsoft and OpenAI. Within a matter of weeks, the two companies have brought AI technology to many millions of people and integrated large language models into numerous standard applications such as search and Office.

The Institute cites a lack of planning and management in the current spread of AI technology. The AI race is out of control, it says, and even its creators cannot “understand, predict, or reliably control” the impact of “powerful digital minds.”

The letter cites common negative scenarios such as AI propaganda, the automation of many jobs, the displacement of human intelligence, and the eventual loss of control over our own civilization as possible risks.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.

More rules for AI

AI labs and independent experts should use the pause of at least six months to develop common safety guidelines that would be overseen by outside experts. These guidelines should ensure that systems “are safe beyond a reasonable doubt.”

