OpenAI co-founder Ilya Sutskever raises $1 billion for his new AI startup Safe Superintelligence (SSI)
Less than four months after leaving OpenAI, the company's former chief scientist and a co-creator of ChatGPT, Ilya Sutskever, has secured $1 billion for his new AI venture, Safe Superintelligence (SSI).
In a post on X, the company revealed that the funding was backed by high-profile investors including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG—an investment partnership co-run by SSI executive Daniel Gross.
Sutskever, also an OpenAI co-founder, said in June that SSI’s mission is clear: “We will pursue safe superintelligence with a singular focus on one goal and one product.”
SSI is building a straight shot to safe superintelligence.
We’ve raised $1B from NFDG, a16z, Sequoia, DST Global, and SV Angel.
We’re hiring: https://t.co/DmFWnrc1Kr
— SSI Inc. (@ssi) September 4, 2024
Sutskever co-founded SSI in June with Daniel Gross, a former Y Combinator partner, and ex-OpenAI engineer Daniel Levy. The company is based in Palo Alto, California, with an additional office in Tel Aviv, Israel. SSI aims to develop superintelligent AI that serves humanity, a vision Sutskever believes could be realized within the next decade.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI,” the company said on its website.
At OpenAI, Sutskever was a key figure, co-leading the Superalignment team, which focused on guiding and controlling AI systems, alongside Jan Leike, who left in May to join rival AI firm Anthropic. Following their departures, OpenAI disbanded the Superalignment team less than a year after its formation; some members were reassigned within the company, as reported by CNBC.
On X, Leike said that OpenAI’s emphasis on safety had been overshadowed by the pursuit of new products.
SSI, led by Sutskever, Gross, and Levy, operates with a clear mission: “Our name reflects our purpose,” the company stated on X. “We are committed to our singular goal, free from distractions of management or product cycles. Our approach ensures safety and progress are shielded from immediate commercial pressures.”
At SSI, Sutskever says he will continue to prioritize AI safety.
Sutskever’s departure from OpenAI came after a tumultuous period that included the controversial ousting of OpenAI co-founder and CEO Sam Altman. Sutskever later expressed deep regret over his involvement in the board’s actions, saying on X, “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”