
Humanity at Risk: Elon Musk and Others Warn Against Hasty ‘Giant AI Experiments’

By Shahirar · March 30, 2023

AI experts including Elon Musk have signed an open letter urging AI labs to pause the development of “giant” AI systems, citing the “profound risks” these systems pose to humanity. The letter, published by the nonprofit Future of Life Institute, states that AI labs are currently locked in an “out-of-control race” to develop machine learning systems that even their creators cannot understand, predict, or reliably control. According to the letter, powerful AI systems that can match or exceed human intelligence could pose “profound risks” to society and humanity.

The letter argues that such powerful AI systems should be developed only once researchers are confident that their effects will be positive and their risks manageable. The signatories call for a public and verifiable pause of at least six months on the training of AI systems more powerful than OpenAI’s GPT-4 model, and say that if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

The letter was signed by several well-known AI researchers, including Yoshua Bengio, Gary Marcus, and Stuart Russell. Other signatories included Steve Wozniak, Yuval Noah Harari, Jaan Tallinn, and Andrew Yang. The letter was largely prompted by the release of GPT-4 by the San Francisco-based firm OpenAI, which the company says is far more capable than its predecessor, GPT-3.

The open letter also calls for the six-month pause to be used to develop shared safety protocols and AI governance systems, and to refocus research on making AI systems more accurate, safe, trustworthy, and loyal. The letter quotes a recent blog post by OpenAI CEO Sam Altman, who suggested that independent review may be needed before training future systems. The authors of the letter write, “We agree. That point is now.”

Musk, who was an early investor in OpenAI and spent years on its board, has long been vocal about the risks posed by AI. Tesla, the electric car company he leads, also develops AI systems to power its self-driving technology, among other applications. The open letter was hosted by the Musk-funded Future of Life Institute and was signed by prominent critics as well as competitors of OpenAI, such as Emad Mostaque, CEO of Stability AI. The letter did not provide specific details about the dangers posed by GPT-4.

The signatories of the letter have also called for a new international treaty to oversee the development of AI, which would establish “binding obligations” for signatory countries and ensure that the technology is developed and used in a responsible and ethical way.

This is not the first time that Elon Musk has warned of the dangers of AI. In 2015, he co-founded OpenAI as a research institute dedicated to developing AI in a safe and beneficial manner. However, in 2018, Musk resigned from the board of OpenAI, citing disagreements over the company’s direction.

In recent years, there have been increasing concerns among experts about the potential risks of AI, particularly as machines become more powerful and autonomous. These risks include the possibility of AI systems causing unintentional harm, being used maliciously by bad actors, or even turning against their creators.

Some experts have also warned about the potential for AI to exacerbate existing inequalities, particularly in areas like healthcare, employment, and criminal justice. They argue that if left unchecked, AI could further entrench biases and discrimination in these areas, and widen the gap between those who benefit from the technology and those who do not.

Despite these concerns, there are many in the tech industry who believe that AI has the potential to revolutionize the world for the better, from improving healthcare outcomes and tackling climate change to creating new economic opportunities and enhancing scientific research.

As the debate over the risks and benefits of AI continues, it is clear that there is a need for greater transparency, accountability, and ethical oversight in the development and deployment of these powerful technologies. Only by working together, across industry, academia, and government, can we ensure that AI is developed in a responsible and beneficial way that serves the needs of all of humanity.
