OpenAI bosses share some thoughts on how to control a superintelligence



Summary

Everyone is talking about ChatGPT and AI text generation. But OpenAI’s real goal is to develop an artificial superintelligence. The company reminds us of its distant goal – and suggests how to keep such a system under control.

Straight from OpenAI’s executive suite comes an article on how to control a potential super-AI: Sam Altman, Greg Brockman, and Ilya Sutskever are the authors.

They discuss possible control systems for superintelligent AI systems. By their definition, these are future AI systems that will be “dramatically more powerful” than even “artificial general intelligence” (AGI), although they do not define the term “superintelligence” more precisely.

Altman, Brockman, and Sutskever expect the impact of artificial superintelligence to be far-reaching, both positive and negative, and compare its potential consequences to those of nuclear energy or synthetic biology. Within the next decade, they write, AI systems will “outperform experts in most fields and do as much productive work as the largest companies do today.”


Coordination, regulation, technology

To effectively control superintelligence, they suggest three starting points:

  • Coordination: Leading super-AI development efforts would need to be coordinated to ensure the safe and smooth integration of superintelligent systems into society. This could be done through a global project launched by major governments, or through a collective agreement to limit the rate of growth of AI capabilities.
  • Regulation: OpenAI reiterates the call it made at the US Senate hearing for a regulatory agency similar to the International Atomic Energy Agency (IAEA). Such an agency would be responsible for overseeing superintelligence. It would inspect systems, require audits, enforce security standards, and set usage restrictions and security levels.
  • Technical solutions: Making superintelligent AI safe will also require new technical capabilities. This remains an open research question.

While the three OpenAI leaders support strict regulation of superintelligence, they also emphasize the need for a clear boundary that allows companies and open-source projects to develop models below a significant capability threshold without regulation.

“The systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.”

Sam Altman, Greg Brockman, Ilya Sutskever

Humans should be in charge of AI

Altman, Brockman, and Sutskever emphasize the importance of public participation and oversight in governing powerful AI systems. In their view, the limits and goals of these systems should be democratically determined.

Within these broad limits, however, users must have “a lot of control” over the AI system they use. OpenAI CEO Altman has previously announced that his company plans to offer customizable AI models in the future.

Finally, the authors justify developing artificial superintelligence despite all the risks: it could potentially lead to a “much better world” than we can imagine today. Examples are already visible in education, creativity, and productivity, they claim.

