OpenAI Just Formed A Team To Keep Superintelligent AI Under Control – SlashGear
The biggest challenge that comes with a superintelligent AI is that it can’t be controlled with the same kind of human supervision methods that are deployed for current-gen models like GPT-4, which powers products like ChatGPT. For systems that are smarter than humans, Superalignment proposes the use of AI systems to evaluate other AI systems and aims to automate the process of finding anomalous behavior. The team would employ adversarial techniques like testing a pipeline by willingly training it with misaligned models.
OpenAI is dedicating 20% of the entire compute resources toward the Superalignment team’s goal over the next four years. OpenAI’s plans of “aligning superintelligent AI systems with human intent” come at a time when calls for pausing the development of existing models like GPT-4 have been raised from industry stalwarts like Elon Musk, Steve Wozniak, and top scientists across the world citing threats posed to humanity.
Meanwhile, regulatory bodies are also scrambling to formulate, and implement, guardrails so that AI systems don’t end up becoming an uncontrolled mess. But when it comes to superintelligent AI, there’s a whole world of uncertainty about the capabilities, risk potential, and whether it is even feasible. Fascinating collaborative research by experts at the University of California San Diego, Max-Planck Institute for Human Development, ORGIMDEA Networks Institute of Madrid, and the University of Chile concluded that a superintelligent system would be impossible to contain and that the containment hypothesis for such a system itself is incomputable.
For all the latest Gaming News Click Here
For the latest news and updates, follow us on Google News.