Why tech pioneers are in a bind about AI
Tech experts led by OpenAI CEO Sam Altman, whose ChatGPT broke the internet, are now insisting that the very technology they helped nurture should be heavily controlled, likening its risks to those of pandemics and nuclear war. Is this a case of fear of the future? Mint explores:
Who exactly is worried about AI?
Historian and writer Yuval Noah Harari, who has been consistent in his criticism of AI, says he is not sure humans can survive AI. He is one of thousands of thinkers and technologists, including Elon Musk, Yoshua Bengio, Gary Marcus and Andrew Yang, who have called for a six-month moratorium on training systems "more powerful than GPT-4". On 30 May, a large number of technology experts reiterated their view that "mitigating the risk of extinction from AI should be a global priority". Ironically, some of these very experts were singing the praises of Generative AI until a couple of months ago.
What do these experts fear about AI?
Center for Humane Technology co-founders Tristan Harris and Aza Raskin underscore that the strength of Generative AI is that it can treat everything as a language, even the DNA that makes us unique. This implies that an advance in any one part of the AI world can have an impact on every other part, creating a network effect. There is no ignoring the fact that the Generative AI treadmill is moving at a furious pace: ChatGPT alone has more than 100 million users. Further, AI is now able to write code on its own. All of this has prompted calls for local and global guardrails.
Are there experts who disagree?
Reacting to the AI backlash, Christopher Manning, director of the Stanford AI Lab, tweeted on 31 May that AI risks and harms can be minimized with "careful engineering" and regulation. Yann LeCun, chief AI scientist at Meta and one of the so-called "godfathers" of AI, supports this line of thinking. He responded with a cryptic tweet: "The silent majority of AI".
Can generative AI reason and think?
Generative AI refers to AI models like ChatGPT, DALL-E, Midjourney and Bing Chat, which can create text, images, videos and code. Philip Goff, associate professor of philosophy at Durham University, argues that while large language models (LLMs) are too complex for us to fully understand, "…we can confidently assert that ChatGPT is not conscious". LLMs are systems trained to give the outward appearance of human intelligence, not systems that actually possess it. Not all will agree with his assessment.
What’s the trend in AI regulation?
Governments want AI regulation to address data privacy, security, algorithmic transparency, bias, diversity and accountability. Canada's rules seek to standardize how private companies design and develop AI across provinces and territories. The US AI Bill of Rights calls for greater data privacy but is not binding. The EU's proposed AI Act categorizes AI systems by risk, ranging from 'unacceptable' to 'minimal'. India's Digital India Act, when notified, is expected to regulate high-risk AI and intermediaries, but there is no separate legislation for AI so far.