Two former OpenAI researchers, Daniel Kokotajlo and William Saunders, who left the company over safety concerns, have expressed disappointment, though not surprise, at OpenAI's decision to oppose SB 1047, California's bill to mitigate AI risks. Both had previously warned that OpenAI was engaged in a "reckless" race for dominance.
In an open letter shared with Politico, they noted that their former boss, OpenAI CEO Sam Altman, has long presented himself as an advocate for AI regulation. "He has repeatedly called for AI regulation," they wrote. "Yet, when genuine regulation is introduced, he stands in opposition." They expressed hope that, with appropriate regulation in place, OpenAI could still fulfill its founding mission of developing artificial general intelligence (AGI) safely.
An OpenAI spokesperson disputed the former employees' claims in a response to TechCrunch, calling them a misinterpretation of the company's position on SB 1047. The spokesperson pointed instead to OpenAI's support for AI legislation at the national level, emphasizing that "frontier AI safety measures should be governed federally due to their national security and competitiveness implications."
Meanwhile, OpenAI competitor Anthropic has signaled its backing for the bill, albeit with reservations, and proposed a set of amendments. After several of those suggestions were incorporated, Anthropic CEO Dario Amodei wrote to Governor Gavin Newsom acknowledging that the bill's "advantages probably outweigh its drawbacks," while stopping short of a full endorsement.