OpenAI Disbands Research Team Studying Potential Risks of ‘Rogue’ AI

– OpenAI launched GPT-4o, its most human-like AI yet
– The company dissolved its Superalignment team after the resignations of Ilya Sutskever and Jan Leike
– The team was dedicated to mitigating long-term AI risks; Leike said its safety work had lost out to OpenAI’s focus on releasing “shiny products”

In a surprising move, OpenAI launched its most human-like AI, GPT-4o, at the same time it dissolved its Superalignment team. The team was created in July 2023 to address long-term AI risks, but disbanded shortly after its co-leads, Ilya Sutskever and Jan Leike, resigned. Sutskever expressed confidence in OpenAI’s ability to build safe and beneficial AGI under its current leadership, hinting at a new project that he finds personally meaningful.

Sutskever, OpenAI’s former chief scientist, had also been involved in the removal of CEO Sam Altman, whose reinstatement raised questions about Sutskever’s future at the company. Leike, another key executive, announced his departure citing disagreements with leadership over core priorities and difficulty securing compute for his team’s research. The Superalignment team’s goal was to develop an automated alignment researcher using 20 percent of the compute OpenAI had secured, an allocation meant to signal the importance of safety in AI development.

The team worked on addressing a range of AI risks, including misuse, economic disruption, disinformation, bias, addiction, and overreliance. While the company acknowledged the ambitious nature of these objectives, Leike said that safety culture and processes had taken a backseat to product development. Some remaining team members have been reassigned to other OpenAI teams, and the company did not respond to requests for comment on these developments.