“All eyes on Rafah”: the political meme, AI and how to mess up the network | Technology

If someone had told us a year ago that the image to go viral against the killing of children in Gaza would be a meme generated by artificial intelligence (AI), it would have sounded dystopian. In 24 hours, the template created on Instagram with the slogan “All eyes on Rafah,” depicting a synthetic refugee camp, had been shared directly 33 million times (at the time of publication), by figures including Nobel Peace Prize laureate Malala Yousafzai, Spanish Vice President Yolanda Díaz, soccer player Ousmane Dembélé, former Finnish Prime Minister Sanna Marin and model Bella Hadid, plus hundreds of thousands more screenshots on other networks. Almost a decade ago, the iconic image of another global tragedy was that of the Syrian boy Aylan Kurdi, lying face down on a beach. Now, after months of horror, will it be an illustration generated in seconds by a machine?

The meme is signed by a young man (he has not seen the questions I sent him by direct message) with the flags of Malaysia and Indonesia on his Instagram profile, @chaa.my_, who had very few followers before the post blew up and who was more active on his other account, @shahv4012. Judging by his activity, he is clearly both overwhelmed and proud of his illustration’s success, and he had made several attempts with a synthetic image generation tool before managing to go viral with a slogan, “All eyes on Rafah,” that had already been circulating on social networks for several days. He hit upon the right combination: the slogan, the image, the moment and the template-sharing tool (normally used to pass all kinds of frivolous trends among friends).

And above all, human emotion. We never tire of repeating it: our psychology is the lever that makes a meme go viral, that makes misinformation spread faster than real news, that makes us hit the retweet or share button. It doesn’t matter that we have spent months debating whether or not deepfakes will be a problem in the future: millions of people have found in a synthetic image the best way to express their outrage at what Benjamin Netanyahu is perpetrating against the Palestinians. The artificial image does not deceive us (this meme is not disinformation per se); rather, it suits us: it fits what we want to express and, above all, what we want to show about ourselves.

Coincidentally, the Rafah meme has gone viral just as Mark Zuckerberg has announced that he will feed his artificial intelligence with the content we share on his networks, Instagram and Facebook, unless we indicate otherwise. Big technology companies are running out of data with which to train their artificial intelligence models: they have already read the entire internet and watched all of YouTube. Now they need photos of our drunken nights, our children and our dogs to reach the holy grail of artificial general intelligence. But if we start sharing machine-generated images like the Rafah one on Meta’s networks, and Meta then trains its models on those synthetic images, we will have a robotic snake biting its synthetic tail. As they say in the trade: shit in, shit out.

Excuse the vulgarity, but we are experiencing a progressive degradation of the internet, what Cory Doctorow, one of the sharpest analysts of the technology ecosystem, calls enshittification. Platforms first please their users, then let their business clients squeeze those users, until finally the platform itself squeezes everyone, with undesirable results. It has happened with Amazon and with TikTok, and now Google is becoming a prime example. For 25 years its search engine won us over with its efficiency, but little by little we have grown used to queries returning anything but an interesting link to click on. Now Google is trying to answer us directly through its artificial intelligence, which confuses quality information with jokes and ends up recommending that we eat rocks or put glue on pizza.

A little over a year ago, some of the most relevant actors in the technology universe (and those with the biggest stakes in it) raised the alarm by calling for a six-month moratorium on the development of artificial intelligence. The catastrophe was so imminent that everything had to stop. Fourteen months have passed, and it is laughable to see what has happened since: the machines have not exponentially improved their capabilities, they still utter the same nonsense as in 2022, and the company that led the revolution, OpenAI (which has a strategic agreement with Prisa Media, the publisher of EL PAÍS), has chained one reputational crisis after another. Meanwhile, Elon Musk, one of the loudest catastrophists at the time, has raised more than €5.5 billion for his own artificial intelligence company, xAI. This is today’s digital environment and the looming conflict: will we be able to stop the wheel of enshittification?

You can follow EL PAÍS Technology on Facebook and X, or sign up here to receive our weekly newsletter.
