Yann LeCun, Chief AI Scientist at Meta and a pioneer of deep learning


Artificial intelligence is one of the most exciting topics of the moment, and Yann LeCun, Meta's chief AI scientist, recently made it clear that he believes systems like ChatGPT will not be able to surpass human intelligence. Because these models are trained on enormous amounts of text and little else, LeCun argues, they will not develop advanced reasoning or a real understanding of the world.

Instead, LeCun proposes an approach that differs from large language models (LLMs). His main bet is on “world modeling”. But what is this new paradigm for AI about?

First of all, it is worth remembering who Yann LeCun is, since he is not just any AI researcher. LeCun is a computer scientist who received the 2018 Turing Award, the highest distinction in computer science, for his contributions to artificial intelligence.

Furthermore, LeCun is one of the “fathers” of deep learning as we know it today and was responsible for developing convolutional neural networks (CNNs). CNNs are used in countless applications today, from autonomous vehicles to the detection of extragalactic objects. In fact, he recently argued with Elon Musk about CNNs, since the controversial billionaire believes they are already obsolete (a big mistake).
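For readers who have never seen one, here is a minimal sketch of what a convolutional neural network looks like in code. It is a toy PyTorch classifier written purely for illustration, not LeCun's original LeNet:

```python
# Minimal illustrative CNN: a toy image classifier, not LeCun's original LeNet.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local visual features (edges, textures, shapes).
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1 grayscale channel -> 8 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        # A small fully connected head turns the feature maps into class scores.
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a fake batch of four 28x28 grayscale images.
logits = TinyCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```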

In short, LeCun is an authority in the world of AI, and both Meta's development teams and Mark Zuckerberg know it. That is why he currently leads the company's efforts to create an artificial general intelligence (AGI). AGI refers to systems that go beyond what LLMs such as ChatGPT have shown so far, because they seek to build an agent with the ability to understand, learn and apply knowledge in a human-like manner.

The problem with LLMs like GPT or Gemini is that they are conditioned by the information (usually just text) on which they are trained. Their answers are therefore shaped by what they have previously “read”, whether it is correct or not, and we have seen LLMs get all kinds of things wrong. One example is when Google's AI recommended putting glue on pizza.
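To picture why an LLM can only echo its training data, here is a deliberately tiny sketch of next-word prediction. The “corpus” is made up for illustration; a real LLM does the same thing at a vastly larger scale with neural networks instead of simple counts:

```python
# Toy next-word predictor: counts word pairs in a tiny "training corpus" and then
# generates text by following those statistics. If the corpus contains a wrong
# "fact", the model will happily repeat it.
from collections import Counter, defaultdict
import random

corpus = "glue keeps cheese on pizza . glue keeps cheese on toast .".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=6, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        counts = bigrams[words[-1]]
        if not counts:
            break
        # Sample the next word in proportion to how often it appeared in training.
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("glue"))  # continues with whatever the training text said, right or wrong
```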

The “world modeling” approach

To overcome this conditioning, LeCun proposes “world modeling”. The key idea behind world modeling is to develop a deep and dynamic understanding of the physical and social environment in which the intelligent agent operates. This approach seeks to imitate the intelligence of humans and animals.

One of the many differences between world modeling and LLMs is that questions would not only be answered from already existing data (text from the Internet or books); the agent would also develop reasoning autonomously and effectively. That is, more or less, what is sought for AGI.
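One rough way to picture the idea is an agent that plans with an internal model of its environment instead of pattern-matching text. The sketch below is purely illustrative (a made-up one-dimensional world with hand-written dynamics, not Meta's architecture): the agent simulates candidate action sequences with its world model and executes the most promising one:

```python
# Illustrative planning with a world model: instead of looking up an answer, the
# agent *simulates* the outcome of candidate action sequences and picks the best.
# Toy 1-D world with hand-written dynamics -- not Meta's actual system.
from itertools import product

GOAL = 5  # the position the agent wants to reach

def world_model(state, action):
    """Predicts the next state. In a real agent this model would be learned from experience."""
    return state + action  # actions are steps of -1, 0 or +1

def plan(state, horizon=3):
    """Imagine every action sequence up to `horizon` steps and return the most promising one."""
    def score(seq):
        s, cost = state, 0
        for a in seq:
            s = world_model(s, a)
            cost += abs(s - GOAL)  # prefer trajectories that approach the goal early
        return (abs(s - GOAL), cost)
    return min(product([-1, 0, 1], repeat=horizon), key=score)

state = 0
while state != GOAL:
    action = plan(state)[0]          # re-plan every step, execute only the first action
    state = world_model(state, action)
    print("moved to", state)
```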

The goal is to create machines that learn from experience, like children. In addition, they would have persistent memory to retain relevant information, hierarchical planning, and an understanding of the physics of the world. However, LeCun and Meta are not the only contenders in the race toward AGI.

Naturally, the first contenders are the LLMs themselves. OpenAI and Google keep presenting improvements to their text generation systems, which even show a certain degree of reasoning. DeepMind, for its part, relies more on reinforcement learning, using simulated environments and even video games so that agents learn to interact with their surroundings.
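For comparison, this is the core loop of reinforcement learning in its simplest tabular form: an agent in a tiny made-up corridor environment improves by trial, error and reward. It is only a sketch of the general idea, not of DeepMind's actual systems:

```python
# Minimal tabular Q-learning sketch: an agent in a 6-cell corridor learns, by trial,
# error and reward alone, to walk to the goal on the right. Purely illustrative.
import random

N_STATES, GOAL = 6, 5
ACTIONS = [-1, 1]                          # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount factor, exploration rate

def step(state, action):
    """The environment: move (clipped to the corridor) and reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
for episode in range(200):
    state = 0
    while state != GOAL:
        if random.random() < epsilon:      # explore occasionally
            action = random.choice(ACTIONS)
        else:                              # otherwise exploit what was learned (random tie-break)
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        nxt, reward = step(state, action)
        target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])   # Q-learning update
        state = nxt

# After training, the greedy action in every non-terminal state is "go right" (+1).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```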

The world modeling that Meta is betting on is one more option. Although LeCun thinks it could take at least a decade to create something resembling an AGI, technology is constantly changing. Maybe in a couple of years we will hit on the key that speeds up the process, or maybe not. In any case, 10 years doesn't sound that far away for the “summer of AI”.

 