Ilya Sutskever, the former chief scientist at OpenAI, has raised $1 billion for his new AI startup, Safe Superintelligence Inc. (SSI). Though unconfirmed, the company is reportedly valued at $5 billion following the close of the round. The startup plans to direct the funds toward AI safety, arguably the most pressing issue facing the field.
SSI’s mission is to ensure that artificial superintelligence does not become a threat as it develops. The concern is widely shared: many researchers believe that a sufficiently advanced superintelligence, left unchecked, could ultimately work against humankind.
One of the most critical parts of SSI’s mission is managing the risks posed by AI, a problem that has occupied much of Sutskever’s career. At OpenAI, he co-led the Superalignment team, which researched how to align AI systems with human values.
Notable investors including Andreessen Horowitz, Sequoia Capital, and DST Global are betting on the startup’s potential despite a broader downturn in AI funding. The $1 billion round signals continued belief in Sutskever’s vision and talent, even though AI safety research may take years to produce a marketable product.
SSI also plans to collaborate with chip and cloud companies to secure the computing power necessary for AI research and development. While it hasn’t disclosed its technology partners yet, the startup will likely maintain its long-term goal of exploring new ways to scale AI, a concept Sutskever championed during his time at OpenAI. However, he suggests that SSI will approach scaling differently, pursuing more innovative techniques rather than simply increasing computing power.
SSI’s focus on AI safety positions it at the center of an ongoing debate in the AI community about the potential risks posed by increasingly powerful AI systems. Sutskever’s experience and the backing of top venture capitalists suggest that SSI could become a significant player in the AI safety space, particularly as governments and organizations grow more concerned about the risks of advanced AI.