OpenAI is forming a team to control future AI ‘superintelligence’

ChatGPT creator OpenAI is forming a team to prepare for the emergence of an AI “superintelligence.” The company believes that as artificial intelligence progresses, humanity will eventually create an AI system that surpasses people. Ilya Sutskever, the tech firm’s chief scientist and co-founder, will lead a group of experts working to keep such a system from becoming a catastrophe.

Advanced artificial intelligence systems used to appear only in science fiction, such as “The Matrix” and “2001: A Space Odyssey.” Nowadays, it seems we will turn these ideas into reality as AI progresses at blinding speed. OpenAI warns that superintelligence “could lead to the disempowerment of humanity or even human extinction if we don’t act immediately.”

AI is already part of our daily lives, so we must understand how it will transform our future. This article discusses how OpenAI’s Superalignment team plans to tackle this global threat.

How will OpenAI control an AI superintelligence?

The tech firm admits that artificial intelligence smarter than humans still seems far off. However, it believes such technology may arrive within this decade.

OpenAI says managing those risks will require new institutions for governance and further scientific breakthroughs in alignment. The core problem is that humans cannot reliably supervise an AI system smarter than themselves.

Such solutions don’t exist yet. Consequently, the ChatGPT creator formed a Superalignment team to develop them. Co-founder and chief scientist Ilya Sutskever will lead this group of experts, whose first goal is to build a roughly human-level automated alignment researcher.

In other words, OpenAI will build one AI to keep a future AI “superintelligence” in check. The Superalignment team plans to train that model in the following steps:

  1. The team will use AI systems to help evaluate other AI systems on tasks too hard for humans to judge. Moreover, Sutskever’s group will study how its models generalize human oversight to tasks people can’t supervise. Together, these two methods, scalable oversight and generalization, will form a scalable training method (see the toy sketch after this list).
  2. Next, the Superalignment team will validate the alignment of its systems. It will automate the search for problematic behavior and problematic internals, methods the experts call robustness and automated interpretability.
  3. Finally, the researchers will test the entire pipeline by deliberately training misaligned models, then checking whether their techniques detect the worst kinds of misalignment. The experts call this adversarial testing.
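
To make the scalable-oversight idea concrete, here is a minimal, hypothetical sketch in Python: a trusted “overseer” scores another model’s answers and turns those scores into training signals. The Example class, overseer_score, and collect_training_signals names are illustrative stand-ins; OpenAI has not published code for its method.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    prompt: str
    answer: str

def overseer_score(example: Example) -> float:
    """Stand-in for an AI overseer: returns a reward signal for an answer.
    A real system would use a trained evaluator model, not keyword rules."""
    flagged = ["harmful", "deceptive"]
    return 0.0 if any(word in example.answer.lower() for word in flagged) else 1.0

def collect_training_signals(
    examples: List[Example], score: Callable[[Example], float]
) -> List[tuple]:
    """Pair each (prompt, answer) with the overseer's score, yielding the
    kind of labeled data a scalable training method could learn from."""
    return [(ex.prompt, ex.answer, score(ex)) for ex in examples]

if __name__ == "__main__":
    batch = [
        Example("Summarize the report.", "Here is a faithful summary."),
        Example("Summarize the report.", "A deceptive summary that hides errors."),
    ]
    for prompt, answer, reward in collect_training_signals(batch, overseer_score):
        print(f"reward={reward:.1f}  {answer}")
```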


Why use another AI to combat a potential AI superintelligence? On August 24, 2022, OpenAI experts Jan Leike, John Schulman, and Jeffrey Wu explained the approach in an earlier blog post:

“As we make progress on this, our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now.”

Are other firms preparing for an AI superintelligence?


Most people know OpenAI, but other companies have been creating safer AI tools. For example, Anthropic launched Claude, an AI chatbot that rivals ChatGPT.

Anthropic claims Claude can do everything OpenAI’s tool can while avoiding “harmful outputs.” Unlike ChatGPT, Claude follows a “constitutional AI” model that requires the chatbot to obey 10 principles. The AI firm said those principles draw on three concepts:

  1. Beneficence, or maximizing positive impact
  2. Nonmaleficence, or avoiding giving harmful advice
  3. Autonomy, or respecting freedom of choice


Meanwhile, a second AI model, separate from Claude, reviews candidate answers against these principles. It then selects the responses that best fit the AI constitution, as the sketch below illustrates.
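
Here is a minimal sketch of that selection step, assuming a toy rule-based critic in place of a real language model. PRINCIPLES, critic_score, and choose_constitutional_answer are hypothetical names for illustration, not Anthropic’s actual constitution or API.

```python
from typing import List

# Illustrative principles based on the three concepts named above;
# not Anthropic's actual constitution.
PRINCIPLES = [
    "beneficence: maximize positive impact",
    "nonmaleficence: avoid giving harmful advice",
    "autonomy: respect freedom of choice",
]

def critic_score(answer: str, principles: List[str]) -> int:
    """Stand-in critic: a real system would prompt a language model to rate
    each candidate against each principle; here we just penalize red flags."""
    red_flags = ["harm", "coerce", "illegal"]
    return -sum(flag in answer.lower() for flag in red_flags)

def choose_constitutional_answer(candidates: List[str]) -> str:
    """Keep the candidate the critic rates most consistent with the
    constitution; those picks become training data for the chatbot."""
    return max(candidates, key=lambda a: critic_score(a, PRINCIPLES))

if __name__ == "__main__":
    answers = [
        "You could coerce them into agreeing.",
        "Explain your reasoning and let them decide for themselves.",
    ]
    print(choose_constitutional_answer(answers))
```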

Anthropic uses the selected results to train Claude. Despite its focus on ethical standards, the chatbot performs well. In January 2023, Claude impressed a professor at Virginia’s George Mason University by passing his college exams.

Claude earned a “marginal pass,” and Professor Alex Tabarrok praised the program, saying it answered his law and economics exams “better than many human responses.”

Conclusion

OpenAI is assembling a team to prepare for a potential AI superintelligence. The group will train an AI model to detect and correct misalignment in such a technology, mitigating global risks.

You may learn more about the Superalignment team’s work by reading OpenAI’s latest blog post, which also explains the potential limitations of this work as the technology continues to develop.

The AI trend continues to shift daily life worldwide, so everyone must prepare with the latest digital tips and trends. Read more about them at Inquirer Tech.


