Because AI is loosely regulated, accountability rests with those within the company, the employees said in the letter, calling on companies to lift non-disclosure agreements and provide protections for employees to voice concerns anonymously.
The move comes amid a wave of departures from OpenAI, with many critics viewing the high-profile exits, including those of co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke to the company’s leaders for pursuing profits at the expense of making OpenAI’s technology safer.
Daniel Kokotajlo, a former OpenAI employee, said he left the company because it downplayed the risks of artificial intelligence.
“I have lost hope that they will act responsibly, especially as they pursue artificial general intelligence,” he said in a statement, referring to the hotly debated term for computers that can match the capabilities of the human brain.
“They and others have embraced a ‘move fast and break things’ approach, which is the opposite of what’s needed for such a powerful yet poorly understood technology.”
OpenAI spokeswoman Liz Bourgeois said the company agrees that “given the importance of this technology, a rigorous debate is essential.” Representatives for Anthropic and Google did not immediately respond to requests for comment.
The employees said that without government oversight, AI workers are “one of the few” who can hold companies accountable. They noted that they are shackled by “extensive non-disclosure agreements,” that normal whistleblower protections are “inadequate” because they focus on illegal activity, and that the risks they warn about are not yet regulated.
The letter calls on AI companies to adhere to four principles to increase transparency and whistleblower protections, including a commitment not to enter into or enforce contracts that prohibit criticism of risks, to establish anonymous processes for current and former employees to raise concerns, to support a culture of criticism, and to not retaliate against current or former employees who share confidential information to raise concerns “after other processes have failed.”
The Washington Post reported in December that OpenAI executives were concerned about retaliation from CEO Sam Altman, a warning that came ahead of his temporary dismissal. In a recent podcast interview, former OpenAI board member Helen Toner said that one of the reasons the nonprofit board fired Altman as CEO late last year was his lack of forthright communication about safety.
“He provided inaccurate information about the few formal safeguards the company actually had in place, which meant it was essentially impossible for the board to know how well those safeguards were working,” she said on the “TED AI Show” in May.
The letter was endorsed by leading AI figures including Yoshua Bengio and Geoffrey Hinton, often described as “godfathers” of AI, and renowned computer scientist Stuart Russell.