
OpenAI is working with the federal government on early-access testing for its next major generative AI model, a nod to growing concerns over the safety of its products.
“our team has been working with the [Commerce Department’s] US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations. excited for this!” OpenAI Chief Executive Sam Altman said in a post on X late Thursday.
The arrangement, which builds on a similar deal struck in June with the U.K.’s AI safety body, comes after a New York Times report last month about a security breach that took place in April 2023. The breach exposed internal discussions among researchers and other employees, but not the code behind OpenAI’s systems.
The timing of the report, and OpenAI’s failure to disclose the breach publicly, intensified scrutiny of Altman and of OpenAI’s shoddy record on AI safety at a time when the technology is rapidly growing more powerful. [OpenAI confirmed the breach to employees and its board of directors in April 2023 on an all-hands call.]
The ChatGPT maker had already riled people inside and outside the company when it disbanded, in May, a unit responsible for developing controls to prevent superintelligent AI systems from going rogue. The controversial decision led to the resignation of the group’s co-leads: Ilya Sutskever, who has since moved to AI startup Safe Superintelligence Inc., and Jan Leike, now a safety lead at OpenAI rival Anthropic.
Since then, OpenAI has dropped the restrictive non-disparagement clauses that muted whistleblowing and has created a safety commission. Some industry observers have compared OpenAI’s attempts at self-regulation to those of Meta Platforms Inc. under CEO Mark Zuckerberg, aimed at appeasing federal lawmakers and regulators.
“finally, we want current and former employees to be able to raise concerns and feel comfortable doing so. this is crucial for any company, but for us especially and an important part of our safety plan. in may, we voided non-disparagement terms for current and former employees and provisions that gave openai the right (although it was never used) to cancel vested equity. we’ve worked hard to make it right,” Altman wrote on X late Thursday.
In a letter to Altman last week, a handful of senators, including Angus King, I-Maine, and Brian Schatz, D-Hawaii, voiced concerns about the company’s policies, including its focus on “shiny products” over safety and societal impacts, its deployment of AI systems without adequate safety review, and insufficient cybersecurity.
Jason Kwon, OpenAI’s chief strategy officer, this week responded that the company is “dedicated to implementing rigorous safety protocols at every stage of our process.”