The Meta AI logo, a blue and pink circle, has appeared in the WhatsApp app of millions of users for the past month. Located in the lower right corner of the mobile screen, the button opens a conversation with the artificial intelligence (AI) tool when pressed. There is no known way to remove it: at most, the conversation with the AI can be deleted, as can be done with conversations with any other contact. Despite this, a spokesman for WhatsApp, a company controlled by Meta, has told the BBC that the AI tool is "entirely optional."
The Meta AI button has been deployed only in some countries, including Spain, and does not appear for all users. "It may not be available to you yet, even if other users in your country have access to it," says the company's website. Meta AI can answer questions, like any other AI-based chatbot, with the peculiarity that there is no need to leave WhatsApp. ChatGPT did the same when it launched its version for WhatsApp. The Meta AI button is also shown to many users of the Instagram mobile app or Facebook Messenger, the latter very popular in the US.
When a conversation with Meta AI is opened for the first time, a notice appears saying that "Meta AI is an optional Meta service that uses Llama models to offer answers." However, the company has not said so far how the floating logo of the blue and pink circle can be removed from the screen.
The notice also states that the tool "can only read the messages that people share with it," that is, it cannot access the app's other conversations. And it warns: "Do not share information, especially sensitive details about yourself or other people, that you do not want the AI to retain and use." It is also noted that "AI generates the messages" and that these "can be inaccurate or inappropriate."
Meta announced last week that in June it will begin to train its AI "using public content, such as posts and comments, that adults share on its products within the European Union." The company argues that this will help improve the service, and that users will be notified of what types of data will be used in that training and how they can object to their data being used in this way. As that process is somewhat cumbersome, the Citizen 8 collective has prepared a script that runs with just two clicks and automatically sends a personalized communication stating that the user refuses to hand over their data for that training.
The company had already announced last summer its intention to use the data generated on its social networks to train its AI models, but the Irish data protection authority halted the initiative.
The technology giant insists that such training "does not use people's private messages with friends and family to train its generative AI models," nor does it use data from minors under 18.
The first fines under the DMA
This same week, the European Commission announced the first fines under the Digital Markets Act (DMA): 500 million euros for Apple and 200 million for Meta. In the case of the latter, the sanction is due to the "consent or pay" model: the company forces its users to pay a monthly subscription if they do not want to hand over to the company the information generated by their activity so that Meta can do business with it. The Commission concluded that this model is illegal; Meta modified it, and the EU authorities are now examining the legality of the new version.
"Apple and Meta have breached the DMA by applying measures that reinforce business users' and consumers' dependence on their platforms. As a result, we have taken firm but balanced enforcement measures against both companies, based on clear and predictable rules. All companies operating in the EU must comply with our laws and respect European values," the European Commission's vice-president for competition explained in a statement.