1. Members of OpenAI’s Superalignment team, responsible for developing ways to govern superintelligent AI, resigned due to limited access to promised compute resources and disagreements over company priorities.
2. Superalignment co-leads Jan Leike and OpenAI co-founder Ilya Sutskever resigned from the company this week, with Leike citing concerns about a lack of focus on critical safety, security, and alignment issues in AI development.
3. Following the resignations, the Superalignment team will no longer exist as a dedicated group; its work will instead be folded into other research efforts across OpenAI, raising concerns about the company’s future focus on AI safety.
OpenAI’s Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s compute resources. In practice, however, the team’s requests for compute were often denied, prompting resignations, including those of co-lead Jan Leike and OpenAI co-founder Ilya Sutskever. Leike said he had been disagreeing with OpenAI leadership over the company’s core priorities, arguing that more attention should go to security, safety, alignment, and societal impact as the company prepares next-generation models.
The Superalignment team was formed to solve the core technical challenges of controlling superintelligent AI within four years. It published safety research and awarded grants to outside researchers, but increasingly struggled for resources as the company’s product launches took priority. According to departing members, safety processes were at times sidelined in favor of shipping new products, raising concerns within the team about OpenAI’s commitment to its stated mission.
Sutskever’s conflict with OpenAI CEO Sam Altman over transparency escalated into a power struggle that saw Altman temporarily removed from his position. Sutskever had played a crucial role on the Superalignment team, advocating for its work with company decision-makers. Following his and Leike’s departures, co-founder John Schulman took over responsibility for the team’s work, now distributed across the company rather than housed in a dedicated group, a change that could dilute the safety focus of AI development at OpenAI.