OpenAI co-founder Ilya Sutskever launches new AI startup, Safe Superintelligence (SSI)
Just a month after his departure from OpenAI, former chief scientist Ilya Sutskever, a key figure behind ChatGPT and GPT-4, announced on Wednesday the launch of his new AI startup, Safe Superintelligence (SSI), focused exclusively on developing safe and powerful superintelligent AI.
SSI is co-founded by Sutskever, former Y Combinator partner Daniel Gross, and ex-OpenAI engineer Daniel Levy. The company is headquartered in Palo Alto, California, with an additional office in Tel Aviv, Israel. According to Sutskever, SSI’s primary goal is to create beneficial superintelligent AI, which he believes could be achieved “within a decade.”
Unlike OpenAI’s recent pivot towards commercial products, SSI will remain a pure research organization, shielded from short-term commercial pressures. While funding details have not been disclosed, Gross confidently stated, “Raising capital is not going to be one of our challenges.”
Sutskever took to X (formerly Twitter) to share the news, stating, “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”
Hints about this new venture surfaced in May when Sutskever mentioned on X that he was working on a “very personally meaningful” project and would share more details in due time.
During his tenure at OpenAI, Sutskever co-led the Superalignment team with Jan Leike, who also left in May to join rival AI firm Anthropic. The Superalignment team, which focused on guiding and controlling AI systems, was dissolved shortly after Sutskever and Leike’s departures. At SSI, Sutskever will continue to prioritize AI safety.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” the company’s official account posted on X. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever’s departure from OpenAI came after a tumultuous period that included the controversial ousting of co-founder and CEO Sam Altman, followed by his swift reinstatement. Sutskever later publicly apologized for his role in the board’s actions, writing in a post on X dated November 20, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
You can read the entirety of Sutskever’s post below.
“Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.
If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.
Now is the time. Join us.

Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024”