OpenAI co-founder Ilya Sutskever, who left the company after a falling-out over Sam Altman's leadership and his stance on safe products, has landed $1 billion in funding for his latest venture, Safe Superintelligence.

The AI startup raised the staggering sum last week in a round led by venture-capital firms Andreessen Horowitz and Sequoia Capital, sparking speculation over whether more talent will bleed to it from OpenAI. SSI is now valued at about $5 billion. (OpenAI's valuation was recently estimated at more than $100 billion and, with another reported $6.5 billion in funding on the way, it could be boosted to $150 billion.)

Safe Superintelligence, which currently has no products, said it will produce a "safe" AI model but offered no specific timeline. Sutskever told The New York Times this month that he has "identified a new mountain to climb that's a bit different from what I was working on previously." He added that the startup doesn't intend to "go down the same path faster."

Its path will inevitably lead to competition with established AI vendors such as OpenAI, Anthropic, Alphabet Inc.'s Google, Meta Platforms Inc. and Elon Musk's xAI.

Safety concerns led to Sutskever's departure from OpenAI in May. He had led OpenAI's "superalignment" team, entrusted with ensuring that the AI models the company creates are "aligned" with humanity's needs rather than posing an existential threat. His misgiving was that Altman's ambitions for smarter AI technology, and his pursuit of profits, came at the potential cost of safe technology. That tension culminated in an executive shakeup that included Sutskever's exit.

Previously, Sutskever was a key figure in the palace drama that briefly led to the ouster of OpenAI Chief Executive Sam Altman in late 2023.


Since then, Altman has insisted that OpenAI is actively developing fast, safe products, though the company has been the target of several lawsuits over the impact of its technology.

The AI boom has both tantalized the tech industry, consumers and developers and terrified them over security issues such as election interference, misinformation and cybercrime. Recent research from MIT listed hundreds of potential AI threats.

Indeed, cyberthreats remain a top concern. The Cybersecurity and Infrastructure Security Agency has called this year's presidential election "the most complex threat landscape," and major cyberattacks have already occurred.

“The software supply chain will undoubtedly become a key target for malicious actors in this historic election year,” Javed Hasan, CEO and co-founder of Lineaje, said in an email. “High-profile catastrophes like the attack on SolarWinds, the Log4J vulnerability, and the 3CX breach have made clear that our software is highly vulnerable — and the technology we use to conduct elections is far from immune to these concerns.”
