
Mint Explainer: What are PaLM 2, LaMDA and GPT-4, the LLMs powering new AI chatbots?

New Delhi: On 10 May, Google announced new generative AI (artificial intelligence) capabilities for Search and Workspace users at its annual developer conference, Google I/O. While Google’s generative AI products have been rolling out slowly, at I/O the company finally seemed ready to take the wraps off its AI, adding it to most of its products.

However, while names like Bard and ChatGPT have been turning heads since last year, it is AI models called large language models (LLMs) that power these products. At I/O 2023, Google also announced a new LLM called Pathways Language Model 2 (PaLM 2), which will be the underlying technology for many of its new AI tools, including Bard, Google’s ChatGPT rival.

At the moment, the company’s generative AI chatbot Bard generates text, translates languages, writes code, and answers complex questions using the Language Model for Dialogue Applications (LaMDA). PaLM 2 will replace LaMDA as Bard’s underlying model.

On the other hand, AI research firm OpenAI’s ChatGPT is powered by Generative Pre-trained Transformer 4 (GPT-4), another LLM. The company’s close association with Microsoft means GPT-4 is behind most of that firm’s AI initiatives, in products like Word, Excel, Edge and more.
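Beyond the chatbot interfaces, developers typically reach these models through an API. As a rough illustration only, here is a minimal sketch using OpenAI’s 2023-era Python client; it assumes an API key with GPT-4 access on your account, and the key shown is a placeholder:

```python
import openai  # pip install openai (the 0.27.x-era client)

openai.api_key = "sk-..."  # placeholder: your own API key goes here

# Ask GPT-4 a question through the chat completions endpoint.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Explain what an LLM is in one sentence."}
    ],
)

# The reply text sits inside the first returned choice.
print(response["choices"][0]["message"]["content"])
```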

These LLMs belong to a class of AI architectures called Transformers, first introduced in 2017 by researchers at Google and the University of Toronto. They are neural networks used for natural language processing and natural language generation because they can model the relationships between elements of sequential data, such as words in a sentence, and generate a response accordingly.
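As a rough illustration of how a Transformer relates words in a sequence, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation of the architecture. The dimensions and random weights are toy stand-ins, not anything drawn from a production model:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a token sequence.

    x: (seq_len, d_model) embeddings, one row per token.
    w_q, w_k, w_v: learned projection matrices, (d_model, d_k) each.
    """
    q = x @ w_q  # queries: what each token is looking for
    k = x @ w_k  # keys: what each token offers
    v = x @ w_v  # values: the information to be mixed
    # Pairwise affinities between every token and every other token.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax turns each row of scores into mixing weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted blend of all tokens' values.
    return weights @ v

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one context-aware vector per token
```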

So, how do PaLM 2, LaMDA and GPT-4 differ from one another? Mint explains:

PaLM 2

PaLM 2 is the upgraded version of the original PaLM, which had 540 billion parameters. Though Google hasn’t disclosed the number of parameters in PaLM 2, the big tech firm claims it has been trained on multilingual text spanning more than 100 languages, which makes it much better at understanding, generating, and translating complex text, such as idioms, poems, and riddles.

Google also claimed that PaLM 2 is much better at reasoning, logic, and mathematical calculations than its predecessors, as it has been trained on large datasets of scientific papers and web pages with mathematical content. For generating computer code, PaLM 2 has been trained on source code datasets and can handle languages like Python and JavaScript, along with others like Prolog, Fortran, and Verilog.

Further, Google said that PaLM 2 will be available in four variants of different sizes (Gecko, Otter, Bison and Unicorn) so it can be deployed for multiple applications and even devices. For instance, Gecko, the smallest, is lightweight and fast enough to run on mobile devices, and can power interactive applications even when a device is offline.

LaMDA

Introduced in 2021, LaMDA was trained on text-based conversations and is designed specifically for dialogue applications such as AI chatbots. The objective of LaMDA was to build chatbots that can handle more open-ended conversations. Its training process included pre-training and fine-tuning, and involved 1.56 trillion words and 137 billion parameters. In LLMs, a parameter is a numerical value learned during training, such as a weight measuring the strength of the link between two neurons in the network. More parameters make an LLM more complex and indicate that it can capture more information.

Last year, a Google engineer claimed that LaMDA had become sentient, or self-aware, after it started responding to conversations about rights and personhood. Google dismissed the claim, suspended the engineer and later fired him.
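To make the idea of a parameter concrete, here is a small illustrative sketch that counts the learnable values in a single fully connected layer; the layer sizes are arbitrary and not drawn from LaMDA itself:

```python
import numpy as np

# A single fully connected layer mapping 512 inputs to 1024 outputs.
# Every entry of the weight matrix and the bias vector is one parameter:
# a number the training process adjusts.
d_in, d_out = 512, 1024
weights = np.zeros((d_in, d_out))  # 512 * 1024 = 524,288 parameters
bias = np.zeros(d_out)             # 1,024 more

n_params = weights.size + bias.size
print(f"{n_params:,} parameters in this one layer")  # 525,312

# An LLM stacks many such layers: LaMDA's 137 billion parameters are
# simply this count taken across the entire network.
```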

GPT-4

OpenAI’s GPT-4 is the most advanced LLM built by the Microsoft-backed AI startup, even though its most successful product, ChatGPT, was initially based on GPT-3.5. The new model powers Microsoft’s AI-driven Bing chat and ChatGPT Plus, the upgraded, subscription-only version of ChatGPT.

Like Google, OpenAI did not share the number of parameters in its latest LLM. However, it is believed to have been trained on a larger dataset than GPT-3, which had 175 billion parameters. At the time of GPT-4’s release in March, OpenAI claimed it could solve difficult problems with greater accuracy thanks to its broader general knowledge. According to OpenAI, GPT-4 is more reliable and creative, and can handle more nuanced instructions, than GPT-3.5.

What also sets it apart from other models is that it is multimodal, meaning it can accept both text and image prompts. OpenAI has said its internal tests showed GPT-4 is 82% less likely to respond to requests for disallowed content, and 40% more likely to produce factual responses, than GPT-3.5.
