AI chatbots like Bard, ChatGPT stoke fears of misinformation nightmare

The outsize investments in generative pre-trained bots such as Microsoft-backed ChatGPT and Google’s Bard are stoking fears of a spike in misinformation, according to industry experts. Social media intermediaries will also be hard-pressed to identify “fake” content and arrest its spread in time, they added.

India, which is among the world’s largest data markets, is already battling a flood of misinformation, worsened by the country’s multiplicity of languages. Generative AI technologies can further accentuate the problem, according to technologists tracking the rise of these applications.

“The potential for misinformation is huge because these AI large language models are not designed for factual accuracy; they are designed for eloquent conversations,” Simon Greenman, co-founder, partner and CTO of Best Practice AI, told ET.

Noting that AI large language models cannot differentiate between “proof-backed information and fiction,” Greenman said, “The fear that we have, from a societal perspective, is that it can amplify toxic content, racism, violence, misogyny, hate speech and political theories that are incorrect and biased.”

ChatGPT, built by San Francisco-based OpenAI, has raked in 100 million users within two months of its launch.

In January this year, Microsoft announced a new multiyear, multibillion-dollar investment to back the technology. Microsoft declined to provide the specifics of the deal but Semafor reported that Microsoft was in talks to invest as much as $10 billion. The deal marked the third phase of the partnership between the two companies, following Microsoft’s previous investments in 2019 and 2021.

Earlier this month, Google announced that it would roll out Bard, a conversational bot powered by a ‘lightweight version’ of Google’s Language Model for Dialogue Applications, or LaMDA. Alphabet CEO Sundar Pichai also announced that the LaMDA technology will be integrated into Google’s core search engine business in the future.

Misinformation is not new. But Greenman said the fear with algorithm-driven large language models is that they suck up content on the internet and then regurgitate it.

“So it’s sucking up misogynistic content from some dark reaches of the internet. It’s spewing it back and it amplifies it,” he explained.

According to the National Crime Records Bureau (NCRB), incidents of fake news and rumour circulation saw nearly a three-fold rise in 2020 over 2019. A total of 1,527 cases of fake news were recorded in 2020, compared with 486 cases in 2019 and 280 cases in 2018.

However, after the pandemic year, the number of cases fell 42%, with 882 cases registered as per the NCRB data for 2021.

In November last year, Ministry of Information and Broadcasting secretary Apurva Chandra said that India has over 1.2 billion mobile phone users and 600 million smartphone users, which makes it a ripe market for spreading misinformation.

In fact, Samir Saran, President of the Observer Research Foundation (ORF) tweeted that countries need to clamp down on ChatGPT and other similar products.

“They need to be put through multi-stage testing and various sandboxes before being made available – we are in uncharted territory here and the impact could be consequential,” he said on Wednesday.

Experts, however, noted that since the AI-led models are generally built around the English language, most of the immediate impact is going to be on English-speaking countries. But it is only a matter of time before the models are localised to cater to different cultures, countries, nationalities and languages.

“My top concern is around ChatGPT’s ability to make eloquent what formerly was not necessarily eloquent,” said Steve Povolny, Principal Engineer and Head of Advanced Threat Research at cybersecurity company Trellix.

In addition, the models were built on data roughly 12 months old, so if someone asks about something happening in February 2023, they will not have an answer; the tool is not very current. Povolny added that ChatGPT will not remain as accessible, and that could be a deterrent for bad actors.

Global concerns

Spanish fact-checkers are gearing up to deal with a potential onslaught of misinformation as the country inches closer to two elections – a national and a regional one. Irene Larraz and her team work at Newtral, one such fact-checking firm based in Madrid.

“A potential risk that we are preparing for is that people whose aim is to distract fact-checkers can use ChatGPT to spread the same misinformation written in different formats. So even though the input may be the same, the result is numerous articles saying the same thing in different ways, and we cannot detect it on all the platforms or in all the places where it is spread,” she explained.

Larraz said an article published a while ago in a Spanish newspaper described a journalist asking Bing’s chatbot whether the President of Spain sports a moustache; the tool said he does, but in reality Pedro Sánchez does not have a moustache.

“This is something small and silly that we may not care about but if ChatGPT has the potential to invent answers for larger issues that can provoke polarization, it is a huge concern especially when it comes to sensitive topics,” she said.
