Helen Toner and Tasha McCauley left OpenAI's board when Sam Altman returned as CEO.
  • Two former OpenAI board members raised concerns about CEO Sam Altman and AI safety in an op-ed in The Economist.
  • Two current board members wrote a response pushing back on the concerns.
  • The current members pointed to OpenAI's new safety committee and Altman's support for regulation.

This has been the week of dueling op-eds from former and current OpenAI board members.

Current OpenAI board members Bret Taylor and Larry Summers issued a response to AI safety concerns on Thursday, stating that "the board is taking commensurate steps to ensure safety and security."

The response comes a few days after The Economist published an op-ed by former OpenAI board members Helen Toner and Tasha McCauley, who criticized CEO Sam Altman and OpenAI's safety practices and called for government regulation of AI. The piece ran under the headline "AI firms mustn't govern themselves."

Taylor and Summers, the two current members, pushed back against the former directors' claims in a response that was also published in The Economist. They defended Altman and discussed OpenAI's stance on safety, including the company's formation of a new safety committee and a set of voluntary commitments OpenAI made to the White House to reinforce safety and security.

The pair said they had chaired a special committee of the newly established board and set up an external review by the law firm WilmerHale of the events leading to Altman's ousting. The process entailed reviewing 30,000 documents and conducting dozens of interviews with OpenAI's previous board, executives, and other relevant witnesses, they added.

Taylor and Summers reiterated that WilmerHale concluded that Altman's ousting "did not arise out of concerns regarding product safety or security" or "the pace of development."

They also took issue with the op-ed's claim that Altman had created "a toxic culture of lying" and engaged in psychologically abusive behavior. Over the last six months, the two current board members said, they had found Altman "highly forthcoming on all relevant issues and consistently collegial with his management team."

In an interview published the same day as her op-ed, Toner explained why the former board decided to remove Altman, saying he had lied to them "multiple times" and withheld information. She also said the old OpenAI board found out about ChatGPT's release on Twitter.

"Although perhaps difficult to remember now, Openai released Chatgpt in November 2022 as a research project to learn more about how useful its models are in conversational settings," Taylor and Summers wrote in response. "It was built on gpt-3.5, an existing ai model which had already been available for more than eight months at the time."

Neither Toner nor OpenAI responded to requests for comment ahead of publication.

OpenAI supports "effective regulation of artificial general intelligence" and Altman, who was reinstated only days after his ousting, has "implored" lawmakers to regulate AI, the two current board directors added.

Altman has been advocating for some form of AI regulation since 2015, more recently saying he favors the formation of an international regulating agency — but he's also said he's "super nervous about regulatory overreach."

At the World Government Summit in February, Altman suggested a "regulatory sandbox" where people could experiment with the technology and write regulations around what "went really wrong" and what went "really right."

OpenAI has seen multiple high-profile departures in recent weeks, including machine learning researcher Jan Leike, chief scientist Ilya Sutskever, and policy researcher Gretchen Krueger. Both Leike and Krueger voiced safety concerns following their departures.

Leike and Sutskever, who is also a cofounder of OpenAI, were the co-leads of the company's superalignment team. The team was tasked with researching the long-term risks of AI, including the chance it could go "rogue."

OpenAI dissolved the superalignment team and later announced the formation of a new safety committee.
