He resigned from the ChatGPT company and warned about an AI that is smarter than humans


A former OpenAI employee, who this week signed a letter along with other employees of the company that created ChatGPT denouncing the opacity around the possible risks of artificial intelligence (AI), points out the danger of a race in the sector to create a super AI, or artificial general intelligence (AGI), that is as intelligent as or more intelligent than human beings.

Carroll Wainwright resigned last week from his position on the practical alignment and superalignment team, which ensures that OpenAI’s most powerful models are safe and aligned with human values.

An AI smarter than humans

When the ChatGPT chatbot was launched – with a success that surprised even the company itself – Wainwright began working on the superalignment team to investigate, as OpenAI obtained increasingly intelligent models, the technical interventions that could be made to maintain control over them.

Unlike generative AI, AGI would not only be able to reproduce human actions – such as writing or drawing, which generative AI already does – but would also understand the complexity and context of those actions.

This technology does not exist yet. Some experts, such as Elon Musk, predict that it will be created in about two years, while others, like Robin Li, CEO of Baidu, one of the largest technology companies in China, say it will arrive in a decade.

Wainwright believes it could arrive in about five years: “Will it definitely happen in five years? No, but I think it’s possible. It may also take much longer. But if there is something that can potentially change the world, we should take it very, very seriously,” he clarifies.

However, the former employee points out that the reason he resigned and then signed the letter is not that he found something “scary,” since for now OpenAI is only investigating the possibility of creating the technology.

The OpenAI shift

The main trigger for his resignation was the change in the vision of the company, which began as a non-profit research laboratory “with the mission that this technology truly benefits humanity,” after the enormous success of ChatGPT in 2022.

“I believe that the motivations that drive OpenAI in its everyday actions are almost entirely profit incentives,” says Wainwright.

The main risks of AGI

Wainwright highlights three risks of AGI: machines replacing workers, especially in skilled jobs; the social and mental impact, as people could come to have a personal AI friend or assistant; and finally, control of the technology.

“The long-term alignment risk arises if you get a model that is smarter than humans. How can you be sure that the model is really doing what the human wants it to do, or that the machine does not have a goal of its own? And if it has its own objective, that is worrying,” he explains.

Wainwright thinks that large AI companies will respect regulations; the problem is that, at the moment, none have been implemented. That is why employees in the sector ask in their letter for a system in which workers can alert an independent body to the dangers they see in their companies.

According to the former employee, the problem of the AI giants is not a lack of security, but the speed at which they move due to the rivalry between companies, especially between OpenAI and Google.

“Just because nobody wants a disaster to happen doesn’t mean they’re going to take the time to make sure it doesn’t happen, since there is the incentive to compete and beat everyone else,” he notes.

Source: with information from EFE.
