A former OpenAI executive has said the company behind ChatGPT prioritizes “shiny products” over safety, revealing that he quit after disagreements over key goals reached a “breaking point.”
Jan Leike, a key safety researcher at OpenAI and co-lead of its Superalignment team, was responsible for ensuring that powerful artificial intelligence systems remain aligned with human values and goals. His intervention comes ahead of a global artificial intelligence summit next week in Seoul, where politicians, experts and technology executives will discuss oversight of the technology.
Leike resigned days after the San Francisco-based company launched its latest AI model, GPT-4o. His departure means two senior safety figures have left OpenAI this week, following the resignation of Ilya Sutskever, the company’s co-founder and fellow co-lead of Superalignment.
Leike explained the reasons for his departure in a thread posted on X on Friday, in which he said safety culture had become a lower priority.
“In recent years, safety culture and processes have taken a back seat to shiny products,” he wrote.
OpenAI was founded with the goal of ensuring that artificial general intelligence, which it describes as “AI systems that are generally smarter than humans,” benefits all of humanity. In his posts, Leike said OpenAI, which also developed the Dall-E image generator and the Sora video generator, should invest more resources in safety, social impact, confidentiality and security for its next generation of models.
“These problems are quite difficult to solve, and I worry that we are not on track to get there,” he wrote, adding that it was becoming “increasingly difficult” for his team to conduct its research.
“Building machines more intelligent than humans is an inherently dangerous undertaking. OpenAI assumes a tremendous responsibility on behalf of all humanity,” Leike wrote, adding that OpenAI must “become an AGI company that puts safety first.”
OpenAI CEO Sam Altman responded to Leike’s thread with a post on X thanking his former colleague for his contributions to the company’s safety culture.
“He’s right, we still have a lot to do; we are determined to do it,” he wrote.
Sutskever, who was also OpenAI’s chief scientist, wrote in his X post announcing his departure that he was confident OpenAI, under its current leadership, will “build AGI that is both safe and useful.” Sutskever had initially backed Altman’s removal as OpenAI chief last November, before pushing for his reinstatement after days of internal unrest at the company.
Leike’s warning came as a panel of international AI experts released an inaugural report on AI safety, which said there was disagreement over the likelihood of powerful AI systems escaping human control. The report also cautioned that regulators could be left behind by rapid advances in the technology, warning of a “potential mismatch between the pace of technological progress and the pace of a regulatory response.”