The Superalignment team at OpenAI is now without both of its chiefs, machine learning researcher Jan Leike and cofounder Ilya Sutskever.
  • Machine learning researcher Jan Leike and cofounder and chief scientist Ilya Sutskever just resigned from OpenAI.
  • Their team's job, essentially, was to make sure humans stay safe from any superintelligent AI OpenAI builds.
  • The team was racing to make sure AI remains aligned with mankind's interests.

It's too soon to say what major departures at OpenAI mean for the company, or if Sam Altman plans to replace the people in these roles with new staffers. But this isn't a great day for AI doomsayers.

OpenAI cofounder Ilya Sutskever, the company's chief scientist, said on X on Tuesday that he "made the decision to leave OpenAI." Calling it "an honor and a privilege to have worked together" with Altman and crew, Sutskever bowed out from the role, saying he's "confident that OpenAI will build AGI that is both safe and beneficial."

The more abrupt departure came from Jan Leike, another top OpenAI executive. On Tuesday night, Leike posted a blunt, two-word confirmation of his exit from OpenAI on X: "I resigned."

Leike and Sutskever led the superalignment team at OpenAI, which has seen a number of other departures in recent months.

"We need scientific and technical breakthroughs to steer and control AI systems much smarter than us," OpenAI said of superalignment in a July 5, 2023 post on its website. "To solve this problem within four years, we're starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we've secured to date to this effort."

So it follows that part of the duo's work was to, in OpenAI's words, "ensure AI systems much smarter than humans follow human intent."

And the fact that there aren't such controls in place yet is a problem OpenAI recognized, per its July 2023 post.

"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI," read OpenAI's post. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs."

Leike, who worked at Google DeepMind before his gig at OpenAI, had big aspirations for keeping humans safe from the superintelligent systems humanity may eventually create.

"It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it," Leike said on an August 2023 episode of the "80,000 Hours" podcast.

On his Substack, Leike has outlined how the alignment problem (when machines don't act in accordance with humans' intentions) can be solved and what's needed to solve it.

"Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve. But maybe not," Leike wrote in March 2022. "By trying to solve the whole problem, we might be trying to get something that isn't within our reach. Instead, we can pursue a less ambitious goal that can still ultimately lead us to a solution, a minimal viable product (MVP) for alignment: Building a sufficiently aligned AI system that accelerates alignment research to align more capable AI systems."

Sutskever, Leike, and representatives for OpenAI did not immediately respond to requests for comment from Business Insider, sent outside regular business hours.
