AI experts reveal ways it could cause catastrophes involving killer robots & more

ARTIFICIAL intelligence experts have identified several risks that may come with the technology as it rapidly advances.

AI technology comes with a lot of perks, but we must be aware of the dangers as well.

The Center for AI Safety (CAIS) insists certain aspects need to be considered and monitored as AI technology becomes more advanced.


The Center for AI Safety (CAIS) released a paper on Monday titled “An Overview of Catastrophic AI Risks,” which discusses killer robots, deadly bioweapons, uncontrollable machines, and more.

CAIS is a tech nonprofit that works to reduce “societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards,” its website states.

CAIS tech experts Dan Hendrycks, Mantas Mazeika, and Thomas Woodside co-authored the paper.

The paper highlighted that “as with all powerful technologies, AI must be handled with great responsibility to manage the risks and harness its potential for the betterment of society.”

The tech experts hope this information will help inform the public and government leaders about AI’s impacts.

KILLER ROBOTS

There is a serious concern that we might lose control over AI systems as they become more intelligent than we are.

“AI could optimize flawed objectives to an extreme degree in a process called proxy gaming. AI could experience ‘goal drifts’ as they adapt to a changing environment, similar to how people acquire and lose goals throughout their lives,” the researchers explained in the paper.

“Although walking, shooting robots have yet to replace soldiers on the battlefield, technologies are converging in ways that may make this possible in the near future.”

DEADLY BIOWEAPONS

It is important to be aware of who controls AI-powered medical technologies, as they could be used to create bioweapons if they end up in the wrong hands.

“A single research team might be excited to open source an AI system with biological research capabilities, which would speed up research and potentially save lives, but this could also increase the risk of malicious use if the AI system could be repurposed to develop bioweapons,” the paper said.

The researchers insist that AI developers must be held liable for damages to reduce mishandling.

“To reduce these risks, we suggest improving biosecurity, restricting access to the most dangerous AI models, and holding AI developers legally liable for damages caused by their AI systems,” the researchers suggest.

UNCONTROLLABLE MACHINES

The researchers suggested that the “AI race” among those competing to have the best technology may lead to rushed development, which could result in losing control of AI.

“The immense potential of AI has created competitive pressures among global players contending for power and influence,” the paper said.

“This ‘AI race’ is driven by nations and corporations who feel they must rapidly build and deploy AI to secure their positions and survive.”

This could lead to “more destructive wars, the possibility of accidental usage or loss of control, and the prospect of malicious actors co-opting these technologies for their own purpose.”
