While Lula calls for AI from the global south, Brazil registers a worrying increase in deepfakes

The president of Brazil, Luiz Inácio Lula da Silva (EFE/Andre Borges)

In his speech on Thursday before the International Labor Organization (ILO) in Geneva, Switzerland, Brazilian President Lula accused rich countries of creating artificial intelligence (AI) to “try to manipulate the rest of humanity”. He therefore asked the ILO and the UN to create a project for an “Artificial Intelligence of the Global South”, without explaining what this differentiation would entail. Lula’s statements came shortly before his participation in the G7 meeting in Italy, where artificial intelligence was one of the topics at the center of the debate among the group’s seven member countries. Even Pope Francis weighed in on the subject, warning of its risks: “Artificial intelligence could bring about greater injustice between advanced nations and developing nations, between dominant social classes and oppressed social classes, thus endangering the possibility of a culture of encounter in favor of a throwaway culture,” the Pope said.

In Brazil, AI is more widespread and technologically advanced than is generally believed. The 2023 Artificial Intelligence report by Distrito, a leading platform for emerging technologies in Latin America, revealed that Brazil has the region’s highest concentration of companies and volume of investments in AI, accounting for 73.65% of all AI startups and positioning itself as the regional leader in development and innovation. Brazilian AI startups received $2 billion in investments in 2022. Even telecommunications and electronics giants such as Motorola are betting on the Latin American giant, considered one of the most promising hubs in the sector worldwide. The American company has invested 25 million reais (about 4.65 million dollars) in research at the Federal University of Amazonas (UFAM), which collaborates on the project with the universities of Acre, Campinas and Pernambuco. Few know that Motorola carries out in Brazil, using artificial intelligence techniques, all of its malware detection, HDR (high dynamic range) image quality work, image processing, Android system security and its artificial intelligence center.

However, the other side of the coin shows a Brazil where AI is used fraudulently. According to data from Sumsub, a technology company that provides identity verification and fraud prevention platforms, the Latin American giant is among the 10 countries in the world with the highest number of deepfakes detected in the first quarter of 2024; from 2023 to 2024, they grew by 840%. Deepfakes, that is, photos and videos adulterated through AI, have become a global threat because their capacity for total manipulation lies at the origin of thousands of frauds and crimes such as bullying and sexual blackmail for money. In Brazil, fraudulent advertisements have appeared on Meta’s platforms in recent months using the voice and face of a well-known Bolsonarist deputy, Nikolas Ferreira, promoting false offers for Starlink devices or even Rolex watches. Deceived by these deepfakes, unsuspecting readers are directed to fake pages imitating the official ones and lose thousands of reais. Deepfakes were also used to manipulate the image of a well-known Brazilian businessman, Luciano Hang, for false fundraising for the victims of the state of Rio Grande do Sul, devastated by floods.

Lula with Pope Francis at the G7. The Supreme Pontiff participated in a panel on artificial intelligence (Vatican Media/Handout via REUTERS)

There is also the problem of the exposure of minors and of racial prejudice. According to a report published on Monday by the non-governmental organization Human Rights Watch, photos of Brazilian children and adolescents are being used to create artificial intelligence tools without their knowledge or consent. The analysis concluded that LAION-5B, a dataset used to train popular artificial intelligence tools and built from information scraped from the Internet, contains links to identifiable photos of children. In addition to the photos themselves, the dataset also provides the names of some children in the captions or in the URL where the photo is stored. In many cases, identities are traceable, with information about when and where the child was at the time the photo was taken. In total, 170 photos of children from ten Brazilian states were discovered. According to the NGO, however, this is only a small sample of a problem that appears to be gigantic: LAION-5B contains 5.85 billion images and captions, and only 0.0001% of the database was analyzed in the report.

WhatsApp’s AI tool for creating stickers from a simple text command has also arrived in Brazil, although for now in test mode. Although the application blocks words like “child,” “cocaine,” and “anorexia,” it allows users to create stickers of people holding firearms, even large-caliber ones such as Kalashnikov rifles. Furthermore, since the drawings generated by the AI are highly simplified, giving the effect of depicting children, many stickers show black children with weapons in their hands. This is a sensitive issue in Brazil, where many children handle guns in the favelas.

On social networks like Instagram, experts have also detected AI flaws that can fuel racial prejudice. For example, some filters applied to black people can blur their faces and erase the texture of frizzy hair. With effects that simulate makeup, such as “Skin smoothing texture,” the face becomes grayish and even blurred. For Silvana Bahia, coordinator of PretaLab, a project that promotes technological production by black and indigenous women, “we use algorithms to try to predict the future, but with data from the past, where black people are stereotyped. How is this future going to be more inclusive if we look at data where stereotypes are repeated?” Image-generating AI relies on image-reading algorithms trained on databases containing thousands of images: the more images the program reads, the better it recognizes them. The problem is that there are few black faces in these databases. In short, in Brazil, where according to the 2022 Census 55.5% of the population identified as black or mixed-race, AI seems to reinforce racial stereotypes, as the newspaper Folha de São Paulo also reported when it asked ChatGPT’s image function to generate images of a Brazilian woman. What emerged was a tanned-skinned woman wearing indigenous jewelry in the middle of a tropical jungle. According to experts, this stereotyping is caused by the lack of differentiated data available on the subject. Nor is this a failure of OpenAI’s models alone: even Google disabled its Gemini model’s ability to generate images after the platform produced historically inaccurate depictions such as Asian and black Nazi soldiers.

Brazil’s upcoming municipal elections in October are also an important litmus test for the country when it comes to the potential misuse of AI. According to the technology company Sumsub, from 2023 to 2024 there was a 245% increase in the appearance of deepfakes worldwide, with growth several times higher in some countries where elections have been or will be held in 2024, such as the United States, India, Indonesia, Mexico and South Africa. Political polarization in Brazil could be amplified at every level of communication, from the simplest, such as the WhatsApp stickers already circulating that show drawings of a sullen Bolsonaro and of Lula with hearts and flowers, to strategic disinformation that risks being deployed by Brazilian politicians, and also by foreign actors, intent on using the municipal elections as a testing ground for the 2026 presidential elections. Precisely for this reason, the Superior Electoral Court (TSE) prohibited the use of deepfakes during the electoral campaign. In April, Tabata Amaral of the Brazilian Socialist Party (PSB), a candidate for mayor of São Paulo, had published on her social networks a deepfake video in which the face of the current mayor, Ricardo Nunes, replaced that of the actor Ryan Gosling, who plays Ken in the movie “Barbie”. Amaral later deleted the video. Beyond the TSE ban, there remains the problem of how to identify the mass of deepfakes that are feared to invade social networks during the campaign. These lifelike manipulations could be used to smear candidates, spread fake news and distort public opinion, with fake videos of inflammatory speeches or fictitious confessions.

Lula da Silva with Giorgia Meloni during the second day of the G7 Summit in Borgo Egnazia, Italy (EFE/ETTORE FERRARI)

But the challenge is global, and not even Brazil can escape the debate on AI regulation. The G7, whose meeting Lula attended at the invitation of the Italian Prime Minister, Giorgia Meloni, is working on the legislative front. At last year’s summit in Japan, the G7 established the so-called “Hiroshima Process” to promote global action on advanced AI systems and an international code of conduct for developers, encouraging responsible and standardized practices in the sector. Among the 11 basic principles of the Hiroshima Process is risk management, which encourages the adoption of assessment measures, including internal and external testing, and the mitigation of various types of risk, from chemical threats to cybersecurity. As for the European Union (EU), its Artificial Intelligence Act was approved in March of this year. It is a set of rules for the development, marketing, deployment and use of artificial intelligence systems in the EU, formulated according to a risk-based approach. It applies to public institutions and companies operating in the European space, and also to external companies whose AI products or services are used in the Old Continent. Misuse of or non-compliance with the rules on artificial intelligence systems can draw very high fines, of up to 35 million euros or 7% of annual global turnover.

Brazil does not yet have its own regulations, but on Monday the Ministry of Science, Technology and Innovation created a working group that will meet until the end of July to guide the proposal of a Brazilian Artificial Intelligence Plan to the National Council of Science and Technology. Meanwhile, the Senate vote to approve Bill (PL) 2,338/2023, which regulates the development and use of AI in Brazil, has been postponed until next Tuesday. The rapporteur is Senator Eduardo Gomes of the Liberal Party, Bolsonaro’s PL, who presented a substitute text for the one presented in 2023 by the president of the Senate, Rodrigo Pacheco. Gomes said AI regulation should not be confused with other issues, such as “the fight against fake news and political polarization.”

However, the new text lacks measures against deepfakes, while the lawyer Estela Aranha, a member of the UN High-Level Advisory Body on Artificial Intelligence, advocated that the bill include mechanisms to inhibit what she called “algorithmic discrimination.” “There is ample statistical and scientific evidence that algorithms have biases that lead to discriminatory results, even if unintentionally. In the literature, discrimination is an artifact of the AI technological process. This leads to illegal or unfair discrimination. These biases can reproduce and amplify racial, gender and class prejudices and inequalities,” Aranha stated.

 