AI poses ‘risk of extinction,’ industry leaders warn

A group of industry leaders warned Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in AI.

The signatories included top executives from three of the leading AI companies: Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern AI movement, signed the statement, as did other prominent researchers in the field.

The statement comes at a time of growing concern about the potential harms of AI. Recent advancements have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building – and, in many cases, are furiously racing to build faster than their competitors – poses grave risks and should be regulated more tightly.

This month, Altman, Hassabis and Amodei met with President Joe Biden and Vice President Kamala Harris to talk about AI regulation. In Senate testimony after the meeting, Altman warned that the risks of advanced AI systems were serious enough to warrant government intervention and called for regulation to address those potential harms.

Dan Hendrycks, executive director of the Center for AI Safety, said that the open letter represented a “coming-out” for some industry leaders who had expressed concerns about the risks of the technology they were developing – but only in private.

“There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
