Elon Musk and more than 1,000 tech researchers and executives have called for a six-month “pause” on the development of advanced artificial intelligence systems such as OpenAI’s newly launched GPT-4, to halt what they describe as a “dangerous” arms race.
An open letter, published on Wednesday by the Future of Life Institute, a non-profit campaign group, had been signed by more than 1,100 individuals from across academia and the tech industry within hours of its publication.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” the letter stated.
Co-signatories include top AI professors Stuart Russell and Yoshua Bengio, the co-founders of Apple, Pinterest and Skype, as well as the founder of AI start-up Stability AI. The Future of Life Institute counts Musk among its biggest funders and is led by Max Tegmark, a Massachusetts Institute of Technology professor and AI researcher.
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the group added.
The letter follows a rush of groundbreaking AI launches over the past five months, including Microsoft-backed OpenAI’s ChatGPT in November and this month’s release of GPT-4, the sophisticated model that underpins the chatbot.
Companies such as Google, Microsoft and Adobe are also adding new kinds of AI features to their search engines and productivity tools, in a move that has put AI into the hands of millions of everyday users.
That accelerating pace of development and public deployment has led some AI researchers and tech ethicists to worry about the potential impact on employment, public discourse and — ultimately — humanity’s ability to keep up.
The letter urged the creation of shared safety protocols that are audited by independent experts to “ensure that systems adhering to them are safe beyond a reasonable doubt”.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” it stated.
Top of the growing list of signatories are Bengio, a professor at the University of Montreal, and Russell, a professor at the University of California, Berkeley.
Musk, who was a co-founder of OpenAI but left in 2018 and has since become critical of the organisation, also signed the letter. Others include Apple co-founder Steve Wozniak, author Yuval Noah Harari and former US presidential candidate Andrew Yang.
The list also includes several engineers and researchers at Microsoft, Google, Amazon, Meta and Alphabet-owned DeepMind. Nobody identifying themselves as an employee of OpenAI was among the first 1,000 people to sign the letter.
The intervention comes as governments around the world are racing to formulate a policy response to the rapidly evolving field of AI, even as some Big Tech companies are cutting back their AI ethics teams.
The UK on Wednesday will publish a white paper that will ask existing regulators to develop a consistent approach to the use of AI in their respective industries, such as ensuring that AI systems are fair and transparent. However, the government will not provide new powers or fresh funding to regulators at this stage.
The EU is also preparing its own legislation governing how AI is used across the bloc, with companies that violate its rules facing fines of up to €30mn or 6 per cent of global annual turnover, whichever is higher.
This story has been updated after the Future of Life Institute removed Character.ai founder Noam Shazeer from its letter.