International agency should monitor AI models, says Sam Altman

1. OpenAI CEO Sam Altman advocates for an international agency to monitor powerful future AI models for safety.
2. Altman believes that oversight is necessary to prevent superintelligent AI from causing global harm.
3. Altman argues that an agency-based approach would work better than legislation, and hints that OpenAI may ship incremental improvements to GPT-4 rather than a model called GPT-5.

OpenAI CEO Sam Altman believes an international agency should be established to monitor powerful frontier AI models for safety. He warns that as AI systems become more advanced, they could cause significant global harm. While the US and EU have been passing legislation to regulate AI, Altman doubts that inflexible laws can keep pace with the technology's rapid advancement. He instead suggests international oversight, similar to nuclear regulation, to prevent superintelligent AI from spiraling out of control.

At the same time, Altman cautions against overregulation, which he says could stifle innovation. He proposes that an international agency focus on safety testing only the most powerful AI systems. This approach, he argues, would be more effective than legislation, which risks becoming outdated given how quickly the field moves.

When asked about a release date for GPT-5, Altman hinted that OpenAI takes its time with major model releases and that the next model may not be called GPT-5. Instead, he suggested that incremental improvements will be made to the existing GPT-4 model, and encouraged following OpenAI's updates to learn about changes coming to ChatGPT. The full interview with Altman is available on the All-In podcast.
