
With AI, let’s not move fast and break things

Merriam-Webster notes that a “Pandora’s box” can be “anything that looks ordinary but may produce unpredictable harmful results.” I’ve been thinking a lot about Pandora’s boxes lately, because we Homo sapiens are doing something we’ve never done before: lifting the lids on two giant Pandora’s boxes at the same time, without any idea of what could come flying out.

One of these Pandora’s boxes is labeled “artificial intelligence,” and it is exemplified by the likes of ChatGPT, Bard and AlphaFold, which testify to humanity’s ability for the first time to manufacture something in a godlike way that approaches general intelligence, far exceeding the brainpower with which we evolved naturally.

The other Pandora’s box is labeled “climate change,” and with it we humans are for the first time driving ourselves in a godlike way from one climate epoch into another. Up to now, that power was largely confined to natural forces involving the earth’s orbit around the sun.

For me the big question, as we lift the lids simultaneously, is: What kind of regulations and ethics must we put in place to manage what comes screaming out?

Let’s face it, we did not understand how much social networks would be used to undermine the twin pillars of any free society – truth and trust. So if we approach generative AI just as heedlessly – if we again go along with Mark Zuckerberg’s reckless mantra at the dawn of social networks, “move fast and break things” – oh, baby, we are going to break things faster, harder and deeper than anyone can imagine.

“There was a failure of imagination when social networks were unleashed and then a failure to responsibly respond to their unimagined consequences once they permeated the lives of billions of people,” Dov Seidman, the founder and chair of the HOW Institute for Society and LRN, told me. “We lost a lot of time – and our way – in utopian thinking that only good things could come from social networks, from just connecting people and giving people a voice. We cannot afford similar failures with artificial intelligence.”

So there is “an urgent imperative – both ethical and regulatory – that these artificial intelligence technologies should only be used to complement and elevate what makes us uniquely human: our creativity, our curiosity and, at our best, our capacity for hope, ethics, empathy, grit and collaborating with others,” added Seidman (a board member of the museum my wife founded, Planet Word). “The adage that with great power comes great responsibility has never been more true. We cannot afford another generation of technologists proclaiming their ethical neutrality and telling us, ‘Hey, we’re just a platform,’ when these AI technologies are enabling exponentially more powerful and profound forms of human empowerment and interaction.”

For those reasons, I asked James Manyika, who heads Google’s technology and society team, as well as Google Research, where much of its AI innovation is conducted, for his thinking on AI’s promise and challenge.

“We have to be bold and responsible at the same time,” he said.

“The reason to be bold is that in so many different realms AI has the potential to help people with everyday tasks, and to tackle some of humanity’s greatest challenges – like health care, for instance – and make new scientific discoveries and innovations and productivity gains that will lead to wider economic prosperity.”

It will do so, he added, “by giving people everywhere access to the sum of the world’s knowledge – in their own language, in their preferred mode of communication, via text, speech, images or code,” delivered by smartphone, through television, radio or e-book. A lot more people will be able to get the best assistance and the best answers to improve their lives.

But we also must be responsible, Manyika added, citing several concerns. First, these tools need to be fully aligned with humanity’s goals. Second, in the wrong hands, these tools could do enormous harm, whether we are talking about disinformation, perfectly faked things or hacking. (Bad guys are always early adopters.)

Finally, “the engineering is ahead of the science to some degree,” Manyika explained. That is, even the people building these so-called large language models that underlie products like ChatGPT and Bard don’t fully understand how they work or the full extent of their capabilities. We can engineer extraordinarily capable AI systems, he added, that can be shown a few examples of arithmetic, or a rare language, or explanations of jokes, and that can then do many more things with just those fragments, astonishingly well. In other words, we don’t yet fully understand how much more good stuff or bad stuff these systems can do.

So, we need some regulation, but it needs to be done carefully and iteratively. One size will not fit all.

Why? Well, if you are most worried about China beating America in AI, you want to turbocharge our AI innovation, not slow it down. If you want to truly democratize AI, you might want to open-source its code. But open-sourcing can be exploited. What would the Islamic State group do with the code? So, you have to think about arms control. If you are worried that AI systems will compound discrimination, privacy violations and other divisive societal harms, the way social networks do, you want regulations now.

If you want to take advantage of all the productivity gains AI is expected to generate, you need to focus on creating new opportunities and safety nets for all the paralegals, researchers, financial advisers, translators and rote workers who could be replaced today, and maybe lawyers and coders tomorrow. If you are worried that AI will become superintelligent and start defining its own goals, irrespective of human harm, you want to stop it immediately.

That last danger is real enough that on Monday, Geoffrey Hinton, one of the pioneering designers of AI systems, announced that he was leaving Google’s AI team. Hinton said that he thought Google was behaving responsibly in rolling out its AI products but that he wanted to be free to speak out about all the risks. “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told The New York Times’ Cade Metz.

Add it all up and it says one thing: We as a society are on the cusp of having to decide on some very big trade-offs as we introduce generative AI.

And government regulation alone will not save us. I have a simple rule: The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever – the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever.

Because the wider we scale artificial intelligence, the more the golden rule needs to scale: Do unto others as you would wish them to do unto you. Because given the increasingly godlike powers we’re endowing ourselves with, we can all now do unto each other faster, cheaper and deeper than ever before.

Ditto when it comes to the climate Pandora’s box we’re opening. As NASA explains on its website, “In the last 800,000 years, there have been eight cycles of ice ages and warmer periods.” The last ice age ended some 11,700 years ago, giving way to our current climate era – known as the Holocene (meaning “entirely recent”) – which has been characterized by stable seasons that allowed for stable agriculture, the building of human communities and ultimately civilization as we know it today.

“Most of these climate changes are attributed to very small variations in Earth’s orbit that change the amount of solar energy our planet receives,” NASA notes.

Well, say goodbye to that. There is now an intense discussion among environmentalists – and geological experts at the International Union of Geological Sciences, the professional organization responsible for defining Earth’s geological/climate eras – about whether we humans have driven ourselves out of the Holocene into a new epoch, called the Anthropocene.

That name comes “from ‘anthropo,’ for ‘man,’ and ‘cene,’ for ‘new’ – because humankind has caused mass extinctions of plant and animal species, polluted the oceans and altered the atmosphere, among other lasting impacts,” an article in Smithsonian Magazine explained.

Earth system scientists fear that this man-made epoch, the Anthropocene, will have none of the predictable seasons of the Holocene. Farming could become a nightmare.

But here is where AI could be our savior – by hastening breakthroughs in materials science, battery density, fusion energy and safe modular nuclear energy that enable humans to manage the impacts of climate change that are now unavoidable and to avoid those that would be unmanageable.

But if AI gives us a way to cushion the worst effects of climate change – if AI, in effect, gives us a do-over – we had better do it over right. That means with smart regulations to rapidly scale clean energy and with scaled sustainable values. Unless we spread an ethic of conservation – a reverence for wild nature and all that it provides us free, like clean air and clean water – we could end up in a world where people feel entitled to drive through the rainforest now that their Hummer is all-electric. That can’t happen.

Bottom line: These two big Pandora’s boxes are being opened. God save us if we acquire godlike powers to part the Red Sea but fail to scale the Ten Commandments.
