When artificial intelligence is inspired by natural selection

Arthur Samuel at the Stanford University Artificial Intelligence Laboratory in 1970.
Stanford InfoLab

In 1959, Arthur Samuel (1901-1990), an IBM engineer and graduate of the Massachusetts Institute of Technology, wrote an article that changed the way computers are programmed.

The problem Samuel faced was not of great consequence in itself: it was simply a matter of developing a program capable of playing checkers. The program analyzed a board position by taking into account various factors, each given greater or lesser importance by means of a numerical weight, and it produced the move with the highest probability of winning the game from that position.
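To make the idea concrete, here is a minimal sketch in Python of such a weighted evaluation: each factor of a position is scored by a feature function, each feature gets a numerical weight, and the program picks the move whose resulting position scores highest. The feature functions and the move helpers (legal_moves, play) are illustrative assumptions, not Samuel's actual code.

```python
# Minimal sketch of a weighted board evaluation, in the spirit of
# Samuel's program. The feature functions, move generator and weights
# are illustrative assumptions, not Samuel's actual implementation.

def evaluate(position, weights, features):
    """Score a position as a weighted sum of its features."""
    return sum(w * f(position) for w, f in zip(weights, features))

def best_move(position, legal_moves, play, weights, features):
    """Choose the move that leads to the highest-scoring position."""
    return max(legal_moves(position),
               key=lambda move: evaluate(play(position, move), weights, features))
```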

The problem was that it was not known which factors mattered most when deciding the best move to make. And this is where Samuel's great contribution lies.

A very insightful checkers player

Instead of assigning specific values, he assigned them randomly. Of course, this produced a program that initially played terribly. The brilliant idea was to make it play against human opponents and modify the values based on the experience gained in those games.

When the software lost a game, it analyzed it by putting itself in the human's shoes, comparing the move the human made at each turn with the one the program would have made in its place. In this way it improved its performance by studying its opponent and learning from its own mistakes.
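One deliberately simplified way to picture that adjustment, sketched below, is to nudge each weight toward the features on which the human's chosen position was stronger than the program's. This is only an illustration of the idea, not Samuel's actual update rule.

```python
# Simplified illustration of learning from a human opponent: after a loss,
# compare the position the human chose with the one the program would have
# chosen, and shift each weight toward the features where the human's choice
# was stronger. A sketch of the idea, not Samuel's actual update rule.

def update_weights(weights, features, program_choice, human_choice, rate=0.01):
    return [w + rate * (f(human_choice) - f(program_choice))
            for w, f in zip(weights, features)]
```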

Samuel’s research was the starting point for the development of artificial intelligence at IBM in the 1950s.
IBM

Samuel had not programmed an algorithm to play checkers; he had designed one to learn how to play checkers. And the program was able to do so even beyond the abilities of its programmer.

It was a pioneering example of what is now known as machine learning, the technology behind such widespread advances as medical diagnosis by artificial intelligence, speech recognition, autonomous vehicles, virtual assistants and the latest language models.

Autonomy to learn

Many of these advances have been driven by the improvement of another family of parametric models: neural networks. In these models there is no fixed function as in Samuel’s program; virtually any function can be approximated, and the number of parameters grows from a handful to hundreds of millions.
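As a toy illustration of what "parameterized function" means here, the sketch below defines a minuscule network with a handful of parameters; changing the parameters changes the function it computes. The architecture and numbers are arbitrary examples, not any real model.

```python
# Sketch of a tiny neural network seen as a parameterized function: the same
# code computes a different function for every setting of its parameters.
# Real models work the same way, but with millions of parameters.
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def tiny_network(x, params):
    """One input, two hidden units with sigmoid activation, one output."""
    (w1, b1), (w2, b2), (v1, v2, c) = params
    return v1 * sigmoid(w1 * x + b1) + v2 * sigmoid(w2 * x + b2) + c

# Two different parameter settings give two different functions of x.
print(tiny_network(0.5, [(1.0, 0.0), (-2.0, 1.0), (0.5, -0.5, 0.1)]))
print(tiny_network(0.5, [(3.0, -1.0), (0.5, 0.0), (1.0, 1.0, -0.2)]))
```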

Furthermore, a human is no longer needed for the program to learn from: today’s systems learn from the vast amounts of data available in every domain of knowledge.

However, the technology behind all these recent models is not especially revolutionary. The method used to adjust their parameters is a variant of gradient descent, a technique that dates back to the end of the 19th century and was applied to neural networks in the 1980s.
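The core of that adjustment method fits in a few lines. Here is a minimal, generic sketch of gradient descent (not any particular library's implementation): each parameter is repeatedly moved a small step in the direction that reduces the loss.

```python
# Minimal sketch of gradient descent: repeatedly move each parameter a small
# step in the direction that reduces the loss. Illustrative only.

def gradient_descent(params, grad_fn, rate=0.1, steps=200):
    """grad_fn(params) must return the gradient of the loss at params."""
    for _ in range(steps):
        grads = grad_fn(params)
        params = [p - rate * g for p, g in zip(params, grads)]
    return params

# Example: minimize (x - 3)^2 + (y + 1)^2, whose minimum is at (3, -1).
grad = lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)]
print(gradient_descent([0.0, 0.0], grad))  # approaches [3.0, -1.0]
```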

The recent success lies in the sophistication and complexity of the models, together with the enormous availability of training data. But scientists have not overlooked another important aspect of machine learning: evolution.

Genetic algorithms

Living beings can, in a simplified way, also be described as parameterized functions whose parameters are their genes. Depending on the values these genes take, one living being or another is produced, with one set of characteristics or another.

It is evolution that, in a way similar to Samuel’s program, has been selecting suitable values for those genes, generating living beings with a greater probability of surviving and discarding huge numbers of unsuitable values along the way.

John Henry Holland developed the first evolutionary learning program in 1975.
Wikimedia Commons, CC BY

Even the neural systems of living beings are the product of a process that has been testing alternative solutions for millions of years, until finding the most effective combination from an evolutionary point of view.

It was from this idea that John Holland (1929-2015) designed, in 1975, the first evolutionary learning program and developed what he called genetic algorithms.

The idea of an evolutionary algorithm is to express the program through a code, modeled on the genetic code, capable of representing a certain behavior. If the code is modified, the program behaves differently.

If we find the right encoding, the program will be able to display highly effective behavior. As in neural systems, the starting point is a random code that is then improved through an artificial version of natural selection.

The programs that perform best cross their codes, generating new programs, in a computational version of sexual reproduction.
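The sketch below shows a toy genetic algorithm in this spirit: random bit strings act as "genetic codes", the fittest are selected, parent codes are crossed over, and bits occasionally mutate. It solves the classic one-max toy problem and is only an illustration, not Holland's original formulation.

```python
# Toy genetic algorithm in the spirit of Holland's idea: candidate solutions
# are bit strings, the fittest are selected, crossed over and mutated.
# An illustrative sketch, not Holland's original code.
import random

def evolve(fitness, length=20, pop_size=50, generations=100, mutation=0.01):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Crossover: splice two parent codes at a random cut point.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            # Mutation: occasionally flip a bit.
            children.append([1 - g if random.random() < mutation else g
                             for g in child])
        population = parents + children
    return max(population, key=fitness)

# Example: evolve a string of all ones (the classic "one-max" toy problem).
best = evolve(fitness=sum)
print(best, sum(best))
```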

Since their appearance, there have been countless advances in this type of algorithm. They have also been successfully applied to select parameters in problems that cannot be tackled in any other way.

The 2006 NASA ST5 spacecraft antenna. This complicated shape was found by an evolutionary computer design program to create the best radiation pattern.
Wikimedia Commons, CC BY

The future of artificial intelligence could lie in combining these two perspectives, the neural and the evolutionary. Evolutionary systems could be useful for automatically generating increasingly complex neural models, or even new learning methods.

In the same way that natural selection has been able to produce neural systems that generate ever more complex behavior through gradual evolution, evolutionary algorithms could be the key to producing increasingly sophisticated artificial intelligence models, better adapted to the requirements placed on them and more intelligent.

The field of evolutionary and biologically inspired systems has great potential that we are only just beginning to glimpse. It will undoubtedly be the subject of much attention in the years to come.

 