
The AI technology that “understands” animal emotions could revolutionize the future of their well-being

Whether emotions exist in animals other than humans is still a matter of debate. Several studies based on behavior, neuroscience and physiology suggest that animals can also feel fear, which helps them avoid danger, and pleasure, which deepens social ties.

In this context, Danish researchers have built a machine learning model capable of distinguishing between positive and negative emotions in seven species of herbivores, including cattle, pigs and wild boars. The model can classify emotions with high accuracy (89.49%) by analyzing the acoustic patterns of their calls.

Positive emotions here refer to a state in which the animal feels satisfied after eating or relaxed in a safe environment. Negative emotions, on the other hand, refer to the animal feeling isolated from its peers, anxious or in danger.

This research shows that artificial intelligence can be used to decipher the emotions of several animal species from their vocalization patterns. “Being able to monitor the emotions of animals in real time could revolutionize animal welfare, livestock management and conservation efforts,” explains Elodie F. Briefer, associate professor of biology at the University of Copenhagen.

Vocal patterns common to all species

Animal emotions are classified along two axes: arousal, which indicates the degree of physical activation, and emotional valence, which indicates whether the emotion is positive or negative. Arousal can be evaluated fairly easily from heart rate and movement, but measuring emotional valence is not so simple. Animal sounds have long been considered a useful indicator of emotion, although until now they had not been fully understood.

Briefer and her team recorded a total of 3,181 calls from seven herbivorous species (cows, sheep, horses, Przewalski's horses, pigs, wild boars and goats) and used them to measure seventeen acoustic characteristics, including duration, fundamental frequency (the lowest frequency component of the signal, which determines the pitch) and amplitude modulation (how the sound volume changes over time).
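To make the feature extraction concrete, the sketch below shows one way such measures could be computed in Python with the librosa library. The file name, frequency bounds and summary statistics are illustrative assumptions, not the study's actual pipeline.

```python
# A minimal sketch of extracting acoustic features from one call recording.
# Assumes librosa is installed; "call.wav" and the pyin frequency bounds
# are hypothetical placeholders.
import librosa
import numpy as np

y, sr = librosa.load("call.wav", sr=None)  # hypothetical recording

# Duration of the call in seconds.
duration = librosa.get_duration(y=y, sr=sr)

# Fundamental frequency (F0): the lowest frequency component of the
# signal, which determines the perceived pitch. pyin gives a frame-wise
# F0 track; we summarize it with its mean.
f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=50, fmax=2000, sr=sr)
mean_f0 = np.nanmean(f0)

# Amplitude over time: frame-wise RMS energy captures how the sound
# volume changes across the call.
rms = librosa.feature.rms(y=y)[0]

features = np.array([duration, mean_f0, rms.mean(), rms.std()])
print(features)
```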

In the study, these characteristics were standardized (aligning data measured on different scales and in different units to a common scale for comparison) and then reduced to two dimensions using UMAP, an analytical method that compresses high-dimensional data into lower dimensions, making the data possible to visualize.
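A minimal sketch of this standardize-then-project step, assuming scikit-learn and the umap-learn package; the input array here is a random placeholder standing in for the 3,181 calls and 17 acoustic features, not real data.

```python
# Standardize 17-dimensional feature vectors, then project to 2-D with UMAP.
import numpy as np
from sklearn.preprocessing import StandardScaler
import umap

rng = np.random.default_rng(0)
X = rng.normal(size=(3181, 17))  # placeholder: 3,181 calls x 17 features

# Standardization puts features measured in different units on one scale.
X_scaled = StandardScaler().fit_transform(X)

# UMAP compresses the 17 dimensions to 2 so the calls can be visualized.
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X_scaled)
print(embedding.shape)  # (3181, 2)
```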

Next, the researchers used the k-means method (an algorithm that divides the data into K clusters and finds the central point within each cluster to which its data points are closest) and a naive Bayes classifier (a method that classifies data based on probability theory, assuming the characteristics are independent of each other) to classify the call data into positive and negative emotions.
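The mechanics of these two methods can be sketched as follows with scikit-learn; the points and emotion labels below are synthetic stand-ins, so this illustrates the techniques rather than reproducing the study's 89.49% result.

```python
# k-means clustering plus a (Gaussian) naive Bayes classifier on toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3181, 2))        # e.g. the 2-D UMAP embedding
y = rng.integers(0, 2, size=3181)     # 0 = negative, 1 = positive (synthetic)

# k-means: partition points into K clusters around central points.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Naive Bayes: probabilistic classifier that treats features as independent.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```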
