The Booty Report

News and Updates for Swashbucklers Everywhere

Avast ye scallywags! The co-founder o' OpenAI be warnin' that the mighty 'superintelligent' AI be needin' some control, lest it be sendin' us all to Davy Jones' locker!

2023-07-07

Aye, me mateys! Avast ye! One o' them OpenAI founders be cautionin' us landlubbers 'bout the dangers o' superintelligence runnin' wild! If we don't be keepin' an eye on it, we'll be sendin' ourselves to Davy Jones' locker, arrr!

A co-founder of artificial intelligence leader OpenAI has expressed the need to control superintelligence to prevent the extinction of the human race. According to co-founder and chief scientist Ilya Sutskever and head of alignment Jan Leike, superintelligence could be both enormously impactful and enormously dangerous: while it could help solve many of the world's most important problems, it also carries the risk of disempowering humanity or even causing human extinction. The two believe that such advances could arrive in the near future.

To manage these risks, they propose establishing new governance institutions and solving the problem of superintelligence alignment: ensuring that AI systems much smarter than humans still follow human intent. Current alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on humans being able to supervise the AI and are not expected to remain effective for superintelligence, so they emphasize the need for new scientific and technical breakthroughs to address this challenge.
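To make the dependence on human judgment concrete, here is a minimal toy sketch of the reward-modelling step behind RLHF: a small model learns to score answers that human labellers preferred above answers they rejected. The feature vectors, network, and data below are purely hypothetical illustrations, not OpenAI's implementation.

```python
# Toy sketch of RLHF reward modelling: learn a reward that ranks
# human-preferred answers above rejected ones (hypothetical data).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Pretend each answer has already been embedded as a 16-dim vector.
# In practice these embeddings would come from a language model.
chosen = torch.randn(64, 16)    # answers human labellers preferred
rejected = torch.randn(64, 16)  # answers they rejected

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(200):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Bradley-Terry style loss: push preferred rewards above rejected ones.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference loss: {loss.item():.3f}")
```

The entire training signal here comes from humans ranking outputs; once a model's outputs outstrip what human labellers can reliably judge, that signal breaks down, which is the scaling problem the new team is targeting.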

To find solutions, Sutskever and Leike are leading a new team dedicated to this problem and plan to devote 20% of the compute OpenAI has secured to date to the effort. They acknowledge that the goal is ambitious and that success is not guaranteed, but they remain optimistic about what a focused and concerted effort can achieve.

Alongside improving its current models and mitigating their risks, OpenAI will focus on the machine learning challenges of aligning superintelligent AI systems with human intent. The aim is to build a roughly human-level automated alignment researcher and then scale it up to align superintelligence. To get there, the team will develop a scalable training method, validate the resulting model, and stress-test the entire alignment pipeline.

It is worth noting that OpenAI's efforts are backed by Microsoft.

Read the Original Article