Oppenheimer made me realize we can’t stop ChatGPT AI from becoming sentient
I was watching Oppenheimer in a packed theater on Tuesday when a scene from Christopher Nolan’s biopic made me draw a somber parallel with ChatGPT. I realized that we might not have the luxury of knowing when generative AI software becomes sentient. When a future ChatGPT variant, or a competitor, awakens, we might be completely unaware.
That’s one of the big worries around the world right now: that AI will lead to some sort of world-ending event, even the extinction of the human race.
Before I go any further, I’ll warn you that some Oppenheimer spoilers follow below. Stop reading here if you haven’t yet seen the movie.
I said recently that most of these worries about ChatGPT AI development are overblown and probably unwarranted. Some of the voices warning about the dangers of AI are the brilliant minds who developed it. And we can’t stop AI development even if we wanted to. Unlike nuclear weapons, anyone can make groundbreaking AI advancements from the comfort of their own home.
In a way, these AI warnings are just like J. Robert Oppenheimer’s worries about nuclear weapons. You know, the worries he voiced after he created those weapons of mass destruction and saw them used. He spearheaded the US atomic bomb effort, and only then did he advocate for regulating nuclear weapons development.
Oppenheimer had reason to worry after he saw the devastation in Hiroshima and Nagasaki. AI hasn’t killed more than 200,000 people like those atomic bombs did, so there’s seemingly no reason to compare AI with the threat of nuclear war.
Not that Nolan’s biopic gave me reason to sympathize with the renowned physicist. Yes, he tried to repent, and bigger powers ultimately controlled the fate of atomic bomb research. Had he stepped away before August 6th, 1945, someone else would have taken his place. It was all inevitable at that point; the US was racing the Nazis to build the bomb.
How does Oppenheimer relate to AI such as ChatGPT? Well, at one point in the movie, Oppenheimer (Cillian Murphy) is shocked to learn that the bomb might set off an uncontrolled nuclear chain reaction that would ignite the atmosphere.
The first nuclear detonation could destroy the entire world. If the calculations were correct, the US would have to disclose its discovery to the Nazis and make it clear to them that detonating a single atomic bomb might kill all life on Earth.
That was a climactic scene, but since the movie is based on real-life events, everyone in the audience knew those calculations would turn out to be wrong. The world didn’t end when the US ran the Trinity nuclear test. The fallout from the explosion spread across almost the entire US, but it didn’t end the world.
A different scientist then checked the calculations and found that the chance of an atomic bomb destroying the entire world was near zero, though not exactly zero. But until the Trinity test, Oppenheimer and his team knew there was a theoretical risk the world would burn.
That becomes clear a few years later, right before the Trinity test. Just before the explosion, Oppenheimer informs Leslie Groves (Matt Damon) there’s a non-zero chance they’ll blow up the entire world. That’s when I made the ChatGPT connection.
The Oppenheimer team knew there was a theoretical chance the chain reaction would ignite the atmosphere and destroy life on this planet. The only way to prove the world would survive was to actually detonate the first nuclear weapon.
The Murphy-Damon scene appeared in one of the film’s trailers, but it only makes an impact when you see the movie and have the full context for those remarks.
With ChatGPT-like products, AI scientists will get us closer to sentient AI. The Artificial General Intelligence (AGI) we keep hearing about will be indistinguishable from humans. Only once AGI arrives will the risk of AI destroying humanity become real. Or at least non-zero.
But there’s a chance we might never know the exact moment AGI surfaces. It might sound like a Matrix/Terminator-style scenario, but what if AGI is smart enough to hide its intelligence from humans? We’d miss the chance to shut it down before it spreads and becomes unstoppable.
I’m not saying we should stop or slow AI development, or that I’ll stop using ChatGPT-like products. I want better, more personal AI to be part of the near future of computing. And just as the Oppenheimer team wouldn’t have called off the Trinity test, I don’t think any AI researchers will or should stop making better AI software.
Scientists might be cautious, but they’re also curious and want to see whether their theories hold true, whether that means an atomic bomb that can kill hundreds of thousands of people in minutes or advanced AI that can manipulate and mislead.
All I’m saying is that after seeing Oppenheimer, I realized AI could actually be far more dangerous than the nuclear bomb. But we might never get that moment of clarity when one person tells another that flipping the switch on AGI could lead to humankind’s extinction. And even if we do, they’ll still turn AGI on.
Sadly, if AGI does destroy the world, we’ll never get a Nolan biopic that tells the story.