
Exclusive: OpenAI's radical proposal for governing superintelligence - Is this the key to safe AI?


May 24, 2023: OpenAI, the company behind ChatGPT, has been actively advocating for the regulation of artificial intelligence (AI). In a recent appearance before a Senate panel, CEO Sam Altman emphasized the necessity of regulating AI, warning that the technology has the potential to go awry.

Altman, along with two other senior executives at OpenAI, has now published a blog post underscoring the importance of establishing governance for superintelligence and managing the associated risks of this emerging technology.


The authors assert that the time has come to consider the governance of superintelligence, which refers to future AI systems that will surpass even artificial general intelligence (AGI) in capabilities.


"Superintelligence will possess greater power than any technology humanity has faced before. While we can anticipate a significantly more prosperous future, we must proactively address the risks. We cannot afford to be solely reactive," states the blog post.


The authors also call for mitigating the risks of current AI technology, while recognizing that superintelligence will require special treatment and coordination.


The blog post outlines three key approaches to navigate the challenges posed by superintelligence:


1. Coordination


The authors argue that coordination among leading development efforts is crucial to ensure the safe progression of superintelligence and its integration with society. This could involve governments worldwide collaborating on a unified project or collectively agreeing, with the support of a proposed organization, to limit the rate of growth in AI capability at the frontier.


2. Establishment of a governing body


The blog post proposes the creation of a governing body for superintelligence. This international authority would be responsible for inspecting systems, conducting audits, ensuring compliance with safety standards, imposing restrictions on deployment and security levels, and addressing existential risks. The agency's focus should be on reducing such risks rather than on matters better handled by individual countries, such as deciding what an AI should be permitted to say.


3. Emphasis on safety


The blog post emphasizes the need to enhance technical capabilities to ensure the safety of AI for everyone. This entails ongoing efforts to develop and implement safeguards and protocols that mitigate risks and potential harm.


During his testimony before US lawmakers, Sam Altman reaffirmed that while OpenAI aims to improve human lives, it also acknowledges the associated risks. Altman expressed the company's willingness to collaborate with the government to prevent harmful outcomes and stressed the critical role of regulatory intervention in mitigating the risks posed by increasingly powerful AI models.



