create a superintelligence with “nuclear” security

  • The co-founder and former chief scientist of OpenAI was left burned by the dismissal and subsequent return of Sam Altman as CEO

  • Safe Superintelligence Inc (SSI) has just been created with the mission of developing safe superintelligence

June 20, 2024, 09:18

Updated June 20, 2024, 09:24

Last November was the most tumultuous month in OpenAI’s history. The company suddenly fired its CEO, Sam Altman, but events moved quickly and Altman ended up returning to his position a few days later. There were consequences, and one of them was the departure of Ilya Sutskever, the company’s co-founder and chief scientist. A month after leaving, Sutskever has just announced the creation of a new AI company.

Safe Superintelligence Inc (SSI). This is the name of the new company founded by Ilya Sutskever, Daniel Gross (a former partner at the Y Combinator incubator who also worked in Apple’s AI division) and Daniel Levy, another former OpenAI engineer. The company will have headquarters in Palo Alto (California, USA) and Tel Aviv (Israel). Both Sutskever and Gross grew up in Israel.

The challenge. The company’s official announcement is surprising first of all for its format: an absolutely minimalist website that seems to come from another era. In it, the founders explain that their mission is exactly what the company’s name indicates: developing safe superintelligence. The question is, safe in what sense?

Nuclear security. In an interview with Bloomberg, Sutskever explained that “by safe, we mean safe as in nuclear safety, as opposed to safe as in ‘trust and safety’”, which is more or less the line Sam Altman has maintained at OpenAI in recent years.

Sutskever always advocated cautious AI development. The engineer was one of those who pushed for the dismissal of Sam Altman, and all the rumors indicated that the two saw the development of OpenAI’s AI models differently: Altman always favored publishing models as soon as possible, while Sutskever seemed much more cautious. After the crisis, however, he kept a very low profile, and barely appeared during the GPT-4o presentation event.

Investment all but assured despite an almost unattainable objective. The pedigree of the founders suggests it will not take long for them to raise a large amount of money in investment rounds. However, that does not mean the company will be successful: creating a superintelligence (as capable as a human being, or more) seems an elusive task, although the search for AGI has already become a focus of both OpenAI and Meta, among others.

How to guarantee safety? Even assuming it is possible to develop AGI or superintelligence, it is far less clear how to make it safe. Sutskever says he has been thinking about this very problem for years and already has ideas for tackling it. For him, “at the most basic level, safe superintelligence should have the property of not harming humanity on a large scale.” The idea is reasonable. The hard part is making it come true.

Image | OpenAI

In Xataka | The rise of artificial intelligence is reviving an old fear in American companies: Chinese espionage

 