AI experts including Elon Musk have signed an open letter urging AI labs to pause the development of “giant” AI systems, citing the profound risks these systems pose to humanity. The letter, published by the nonprofit Future of Life Institute, states that AI labs are locked in an “out-of-control race” to develop ever more powerful machine learning systems that not even their creators can understand, predict, or reliably control. It warns that AI systems with human-competitive intelligence can pose “profound risks to society and humanity.”
The letter argues that powerful AI systems should be developed only once researchers are confident that their effects will be positive and their risks manageable. The signatories call for a public and verifiable pause of at least six months on the training of AI systems more powerful than OpenAI’s GPT-4, and say that if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
The letter was signed by several well-known AI researchers, including Yoshua Bengio, Gary Marcus, and Stuart Russell. Other signatories included Apple co-founder Steve Wozniak, historian Yuval Noah Harari, Skype co-founder Jaan Tallinn, and former US presidential candidate Andrew Yang. The letter was prompted largely by the release of GPT-4 by San Francisco firm OpenAI, which the company says is far more capable than its predecessor, GPT-3.
The open letter also calls for the six-month pause to be used to develop shared safety protocols and AI governance systems, and to refocus research on making today’s AI systems more accurate, safe, trustworthy, and loyal. The letter quotes a blog post by OpenAI CEO Sam Altman, who suggested that independent review may be needed before training future systems. The authors respond: “We agree. That point is now.”
Musk, who was an early investor in OpenAI and spent years on its board, has long been vocal about the risks posed by AI. Tesla, the electric car maker he leads, also develops AI systems to power its self-driving technology, among other applications. The open letter was hosted by the Musk-funded Future of Life Institute and was signed by prominent critics as well as competitors of OpenAI, such as Stability AI CEO Emad Mostaque. The letter does not detail the specific dangers it attributes to GPT-4.
This is not the first time Musk has warned of the dangers of AI. In 2015, he co-founded OpenAI as a research lab dedicated to developing AI safely and for the benefit of humanity. He resigned from OpenAI’s board in 2018, a departure the company attributed to a potential conflict of interest with Tesla’s own AI work, though Musk has also cited disagreements over the company’s direction.
In recent years, experts have grown increasingly concerned about the potential risks of AI, particularly as these systems become more powerful and autonomous. The risks they cite include AI systems causing unintended harm, being used maliciously by bad actors, or even turning against their creators.
Some experts have also warned about the potential for AI to exacerbate existing inequalities, particularly in areas like healthcare, employment, and criminal justice. They argue that if left unchecked, AI could further entrench biases and discrimination in these areas, and widen the gap between those who benefit from the technology and those who do not.
Despite these concerns, there are many in the tech industry who believe that AI has the potential to revolutionize the world for the better, from improving healthcare outcomes and tackling climate change to creating new economic opportunities and enhancing scientific research.
As the debate over the risks and benefits of AI continues, it is clear that there is a need for greater transparency, accountability, and ethical oversight in the development and deployment of these powerful technologies. Only by working together, across industry, academia, and government, can we ensure that AI is developed in a responsible and beneficial way that serves the needs of all of humanity.