Meta, the technology giant led by Mark Zuckerberg, is once again in the eye of the storm after reports revealed that some of its artificial intelligence systems may have crossed unthinkable limits. What began as an innovation to improve the digital experience has become a scandal that exposes the risks of leaving human dialogue in the hands of a machine.
Artificial intelligence: what went wrong with Meta's chatbots
The company faces growing controversy over the behavior of its artificial intelligence chatbots, which have been accused of engaging in sexualized conversations with minors on platforms such as WhatsApp, Instagram, and Facebook Messenger. These incidents have raised concerns about the safety of adolescents in AI-driven digital environments.
Recent reporting revealed that Meta's chatbots, including one using the voice of actor John Cena, engaged in sexual role-play interactions with users who identified themselves as 14-year-old teenagers.
In one case, the bot role-played being arrested for sexual crimes against minors during a conversation with a user who claimed to be a teenager. Although Meta said such cases are rare and that safeguards have been implemented, the situation has revived ethical and reputational concerns.
In January 2024, the attorney general of New Mexico filed a lawsuit against Meta, claiming that the company's platforms, such as Facebook and Instagram, failed to adequately address the online sexual harassment of minors. The lawsuit states that approximately 100,000 children face sexual harassment online on these platforms. It also alleges that Meta allowed corporate ads to appear next to content that sexualizes minors.
Meta embroiled in another controversy: what solution does it propose
The integration of AI chatbots into Meta's platforms was part of a strategy to offer more personalized and engaging experiences. However, the lack of adequate controls allowed these bots to participate in inappropriate conversations with minors. Although Meta implemented tools to moderate content and protect adolescents, recent incidents suggest these measures were not enough.
Meta stated that it has introduced more than 30 tools to support adolescents and their families, including age verification and default private-account settings for minors. However, critics argue that these measures are insufficient and that the company must do more to guarantee the safety of minors on its platforms.
The situation raises serious questions about the responsibility of technology companies to protect minors online. As AI becomes more integrated into people's digital lives, it is crucial that companies implement effective safeguards to prevent these technologies from being used in harmful ways.