1. OpenAI’s Superalignment team, tasked with governing and steering “superintelligent” AI, was promised resources that were never delivered, leading to resignations including co-lead Jan Leike and co-founder Ilya Sutskever.
2. The Superalignment team focused on safety research, but as product launches took precedence, it had to fight for upfront investment, citing concerns about the company’s safety culture and processes.
3. Following the resignations, John Schulman will lead the work previously done by the Superalignment team, now carried out by a loosely associated group of researchers embedded across different company divisions, a restructuring that raises concerns about the priority of safety in AI development.
OpenAI’s Superalignment team was promised 20% of the company’s compute resources to govern and steer “superintelligent” AI systems, but its requests for resources were often denied, leading several team members, including co-lead Jan Leike, to resign. Leike disagreed with OpenAI’s leadership over core priorities, arguing that the company was not adequately preparing for future generations of models or investing in safety measures.
The Superalignment team published safety research and provided grants to outside researchers working on the core technical challenges of controlling superintelligent AI. As product launches consumed more of the leadership’s bandwidth, the team struggled to secure investments it believed were crucial to the company’s mission.
OpenAI CEO Sam Altman clashed with the board and was nearly fired late last year. With the departures of Leike and co-founder Ilya Sutskever, John Schulman has taken over the work of the Superalignment team, which will now operate as a loosely associated group of researchers embedded throughout the company. This restructuring raises concerns that safety measures could be deprioritized in AI development.
OpenAI has stated its commitment to addressing concerns raised by the departures and emphasized the need for a tight feedback loop, rigorous testing, world-class security, and a balance between safety and capabilities in AI development. Despite the changes, the fear remains that OpenAI’s AI development may lose its safety focus.