Why AI’s top minds think it could end humanity, and how we can stop it

Roughly six months ago, ChatGPT was released to the public. Within two months, it had hit an astounding 100 million monthly active users. Three months after that, at least 1,000 tech leaders and AI experts called for a moratorium on developing artificial intelligence (AI) models more powerful than GPT-4.

Now, some of those same top minds, including two of the field’s godfathers, Yoshua Bengio and Geoffrey Hinton, are saying that AI could even wipe out humanity.

Just like the technology itself, the conversation around AI has evolved at a breathtaking pace. A year ago, most people had never heard of a large language model, AI was still being called machine learning, and the extinction-level dangers we feared were climate change, nuclear war, pandemics and natural disasters.

There’s enough to be worried about as it is. Should an AI apocalypse add to your list of stressors?

“Yes, the average Canadian needs to be worried about artificial general intelligence (AGI) development,” Bengio told Global News, referring to a hypothetical AI model that could reason through any task a human could. “It’s not like we have AGI now, but we have something that’s approaching it.”

Bengio’s foundational research in deep learning helped lay the groundwork for modern AI and has made him one of the most cited computer scientists in the world.

He speculates that AGI could arrive anywhere from a few years to a decade from now, but he speaks with certainty that the technology is coming. And once we create an AI model that matches human intelligence, not just in general knowledge but in the ability to reason and understand, it won’t take long for it to surpass us.

If humans lose our edge as the most intelligent beings on Earth, “How do we survive that?” Hinton once asked in an interview with MIT Technology Review.

But not everyone agrees.

Some critics say AI doomsayers are overstating how quickly the technology will improve, all while giving tech giants free publicity. In particular, they highlight the harms AI is already causing and worry that talk of human extinction will distract from the problems in front of us right now.

For instance, Google still hasn’t fixed a computer-vision problem that caused controversy in 2015, when Google Photos identified photos of Black people as gorillas. (Eight years later, Google Lens still avoids labelling anything a primate, the New York Times reported.) And yet, the French government is confident enough in computer vision that it plans to deploy AI-assisted drones to scan for threatening crowd behaviour at the upcoming Olympics.

AI image and voice generators are already being used to sow disinformation, create non-consensual pornography, and scam unwitting people, among a legion of other issues.


Video: Better Business Bureau warns of artificial intelligence scams


It’s clear the danger today is real. But for Bengio, the risks of tomorrow are so dire that we would be unwise to ignore them.

“Society takes time to adapt, whether it’s legislation or international treaties, or even worse, having to rethink our economic and political system to address the risks,” Bengio warns.

If we can start the conversation now about how to prevent some of the future’s biggest problems, Bengio thinks we should.

So, how do we build an AI that doesn’t wipe out humanity? How do we get global consensus on using AI responsibly? And how do we stop the AI we already have from causing catastrophe?

Problem No. 1: Rogue AIs

There are a number of ways in which AI could theoretically cause an extinction-level event, and they don’t all require a superintelligent AGI. But starting from the top, the scariest and least likely scenario is what AI ethicists call the “control problem”: the idea that an AI could go rogue and threaten humanity.

At its core, this fear boils down to humans losing our competitive edge at the top of the food chain. If we are no longer the smartest and most capable, is our time in the driver’s seat over?

AI ethicist and philosopher of science and technology Karina Vold elaborates on this with an analogy called “the gorilla problem.”

“In evolutionary history, we’re really, really similar to gorillas. We’re just a little bit smarter. Let’s say we have slightly more competitive advantages, but those were enough. That small variation was enough. That’s put us in a position where we now basically decide what happens to gorillas. We decide if they live, if they die, if the species ends, if it aligns with our values.”

Thankfully, the continued existence of gorillas does align with human values around biodiversity, she notes, though we’ve driven a great number of other species to extinction.

“But the analogy is that if something like that happens with an AI system and we don’t have the appropriate type of control over that system or it somehow becomes smarter than us, then we might end up in a position where we’re one of the extinct species now.”

This doesn’t necessarily translate to a Terminator-Skynet-style dystopia where autonomous robots wipe us all out. Even if an AGI didn’t have free agency in our world but was able to communicate with us, it could potentially manipulate humans to achieve its goals. But why would an AGI even want to destroy us in the first place?

A more reasonable fear is that an AGI would be apathetic toward humans and human values. And if we give it a poorly defined goal, things could play out like The Monkey’s Paw: you might get exactly what you wished for, along with unforeseen consequences.

“For example, we may ask an AI to fix climate change and it may design a virus that decimates the human population because our instructions were not clear enough on what harm meant, and humans are actually the main obstacle to fixing the climate crisis,” Bengio says.


Yoshua Bengio, founder and scientific director, Mila-Quebec AI Institute, discusses artificial intelligence, democracy and the future of civilization at the C2MTL conference on May 24, 2023, in Montreal. THE CANADIAN PRESS/Christinne Muschi.



So how do we mitigate these risks? The out-of-control AGI scenario presupposes two things: that the AGI has some access to our world (say, to build killer robots directly, or to get on the internet and convince a human to do it), and that it has goals it wishes to execute.

Bengio posits we can build AI systems that circumvent these two problems entirely.

Solution No. 1: AI scientists

For Bengio, the safest way to build AI systems is to model them after idealized, impartial scientists. They would not have autonomous access to the world and would not be driven by goals; instead, they would focus on answering questions and building theories.

“The idea of building these scientists is to try to get the benefits of AI, the scientific knowledge that would allow us to cure all kinds of medical problems and fix problems in our environment and so on, but (the AI would) not actually do it themselves. Instead, it would answer questions from engineers and scientists who then will use that information in order to do things. And so there will always be a human in the loop that makes the moral decision.”

These AI systems would have no need for goals; they wouldn’t even need knowledge-seeking as an objective, Bengio argues. And this gets around the problem of an AI creating subgoals that aren’t aligned with human needs.

“The algorithms for training such AI systems focus purely on truth in a probabilistic sense. They are not trying to please us or act in a way that needs to be aligned with our needs. Their output can be seen as the output of ideal scientists, i.e., explanatory theories and answers to questions that these theories help elucidate, augmenting our own understanding of the universe,” he writes.

Building an AGI that isn’t autonomous and doesn’t have goals is all well and good in theory, but all it takes is one country, one company or even one person to build a model that doesn’t follow these rules for the rogue AGI danger to rear its ugly head again.


Video: Ontario urged to develop ‘guardrails’ on public sector use of AI


And that brings us to our next extinction-level risk. The world is a fractured place, and not every global actor shares the same values on responsible AI.

Problem No. 2: Global disruption

A far more likely path to catastrophe is that AI developments come fast enough, and prove powerful enough, to disrupt the current global order.

And we don’t even need to develop AGI for this scenario.

For example, a narrowly focused AI applied to an advanced weapons system, or designed to destabilize political institutions through propaganda and disinformation, could lead to tragedy and loss of life.

“It’s plausible that the current brand of large language models like GPT-4 could be used by malicious actors in the next U.S. election, for example, to have a massive effect on voters,” Bengio warns.

AI poses a danger to the global order because of the “downstream effects of having really advanced technologies emerge quickly in political environments that are as unstable as our current global political environments,” Vold explains.

Countries and corporations everywhere have incentives to put AI safety on the back burner and barrel forward toward the strongest possible AI, and with it the promise of power, money and market share. And while big tech companies say they welcome regulations on AI, they are still investing billions in the technology even as they, too, warn of its existential risks.


Video: ‘AI gold rush’: Nvidia nears trillion-dollar market cap club


Say a country makes a massive AI breakthrough that no other country has solved. It’s easy to see how the pursuit of national interest could lead such a nation to use its powerful tool unethically.

We’ve already seen this play out with the creation of the nuclear bomb. The only time the atomic bomb was used on a civilian population was when the U.S. was the only country capable of making nuclear weaponry.

Would the U.S. have so easily unleashed the bomb and killed hundreds of thousands if Japan had been able to respond in kind? As nuclear weapons proliferated in the aftermath of the Second World War, the incentive to use them plummeted. The realities of mutually assured destruction helped establish a new global balance of power.

The hope among some AI doomsayers is that the kind of international cooperation achieved on nuclear disarmament, or, for instance, on human cloning, could play out again as a consensus on AI.

Solution No. 2: Building a global consensus on AI

Well, there’s no clear answer here. But for Gabriela Ramos, assistant director-general for social and human sciences at UNESCO, there’s reason to be optimistic.

Ramos doesn’t dwell on AI doom scenarios because she’s “not in the world of predicting outcomes.”

“I’m in the world of trying to correct what I see concretely needs to be done.”

In 2021, she helped oversee the adoption of the first-ever global instrument to promote responsible AI development. All 193 member states voted to adopt UNESCO’s recommendations for ethical AI, which place human rights at the centre of the conversation.

Of course, these are just recommendations, and they’re not legally binding. But the unanimous vote signals a willingness to get on the same page when it comes to AI.

And for Ramos, though companies are mainly the ones driving AI innovation, it’s the responsibility of governments to prevent them from behaving badly.

“The duty of care is with governments, not with the companies. Companies will always take advantage of any loophole, always. It is in their nature that they are there to produce profit. And therefore if you have a space that is not regulated, they will use it,” she said.

The European Union is taking a step toward taming the AI wild west with an act that could become the most comprehensive AI regulatory framework yet. If approved, any company that wishes to deploy an AI system in the EU would have to abide by it, regardless of where it is headquartered. It’s one way that multilateral, if not quite global, regulations can still have a wide-reaching effect.


Video: ‘Responsible’ AI rules drafted by European parliament as lawmaker proclaims ‘we have made history’


Some AI applications would be banned outright, such as real-time facial recognition in public spaces and predictive policing. Other high-impact AI systems like ChatGPT would have to disclose that their content is AI-generated and distinguish real images from generated ones.

A similar provision has already appeared in draft regulations out of China, which would require companies to tag AI-generated content such as video or images so that consumers can be aware and protected.

Another overlap between the EU and Chinese draft provisions is regulating what kinds of data can be used to train these AI models. Meanwhile, Canada tabled the Artificial Intelligence and Data Act in 2022, though specific regulations still haven’t been released.


Video: EU wants a label on AI-generated content


It’s clear AI regulations are on the agendas of the world’s powers, and not just Western liberal democracies.

“I think humanity can pull it off,” Bengio says. “If we are seeing that there is a risk of extinction for humanity, then everybody can lose. It doesn’t matter if you’re Chinese or Russian or American or European or whatever. We would all lose. Humanity would lose and we might be ready to sit at those tables and even change our political systems in order to avoid these things.”

Will we be OK?

It’s hard to say how things will turn out, but even those like Bengio who are sounding the alarm on AI say there is reason for hope. And while he talks of danger in the future, he’s really looking for solutions today.

One more fear he discussed was the possibility that superintelligent AI technology could become so easily accessible that any person could create their own AGI and use it to wreak havoc. Because of this, Bengio is calling for global access to health care, mental health care, education and more to address instability in our world and prevent the root causes of violence.

“We would need to reduce misery, anger, injustice, disease — you know, all of those things that can lead people to very bad behaviour,” Bengio notes. “So long as they could just use their hands or a gun, it wasn’t too bad. But if they can blow up a continent, or even the whole species, well, we need to rethink how we organize society.”

“I don’t claim to have answers, but I think the most important thing is to be ready to challenge the current status quo.”

Though Vold signed the recent open letter calling for AI extinction risks to be taken as seriously as nuclear war, she thinks it’s “much more likely that everything’s going to be fine.”

“When we talked about there being nuclear war, often the rhetoric was catastrophic. It was more likely than not that this wasn’t going to happen, but it was considering those catastrophic scenarios, even existential scenarios that also got people to take the risks seriously,” she notes.

She hopes that if society and governments acknowledge the existential risks, we will see better regulations that address the more near-term concerns.

“That’s one reason why I think that this is not something that we should just ignore. This might be the rallying cry that we actually need.”
