Google, Meta and Microsoft join forces to combat images of child sexual abuse created with AI


The companies participating in the project will have to subject their AI models to robust tests to verify that they cannot generate harmful images. (Illustrative image: Infobae)

Leading companies such as Google, Meta, Microsoft, OpenAI and Amazon have joined forces in a pioneering project that aims to establish strong regulations to prevent AI from generating images of child sexual abuse.

This project, promoted by Thorn and All Tech is Human, seeks to direct the transformative power of AI toward progress that protects the rights and integrity of minors.

The initiative highlights the importance of using clean databases for training AI models. The companies have committed to ensuring that the data used to feed their systems does not contain images that exploit minors.

In addition, technology companies will need to implement robust control mechanisms and rigorous testing to demonstrate that their generative AI models are incapable of creating images of child sexual abuse.

Google, along with other technology companies, committed to using clean databases. REUTERS/Steve Marcus

This project also reinforces the idea that technological innovation must go hand in hand with a strong development ethic. In its quest to explore the limits of the possible, AI must not overstep the boundaries of what is morally acceptable.

The magnitude of the problem that these guidelines seek to mitigate is alarming. In the United States alone, more than 104 million files suspected of containing child sexual abuse material were reported in 2023, according to data provided by Thorn.

This figure reflects the urgency of establishing effective digital barriers against the proliferation of such content on the Internet.

Generative artificial intelligence, although it has enormous potential for innovation, also represents a significant risk to the protection and safety of minors, which is why it deserves urgent attention.

AI could affect police investigations of these types of crimes. (Illustrative image Infobae)

According to the document that supports the aforementioned project, the proliferation of AI-generated fake images can complicate the process of identifying victims.

Additionally, this technology can be used to intensify harassment practices, such as grooming and sextortion, by producing content that simulates explicit actions and poses without the need for explicit initial material.

This type of content could also have the potential to increase physical contact crimes. “The normalization of this material also contributes to other harmful outcomes for children,” the document reads.

Generative AI models can be used by predators to share tips on how to effectively commit abuse, including ways to coerce victims, destroy evidence, or manipulate abuse material to avoid detection.

The generation of non-consensual sexual images with AI is a phenomenon that extends to adults. (Illustrative image Infobae)

The phenomenon of the creation and distribution of non-consensual images using artificial intelligence has expanded beyond minors, also affecting adults, including public figures and celebrities.

A recent investigation by Forbes uncovered how eBay, the well-known online auction platform, had become a marketplace for the sale of fake explicit content from well-known celebrities.

Using advanced artificial intelligence and editing tools such as Photoshop, sexually explicit images of at least 40 famous personalities were generated, including Margot Robbie, Selena Gomez and Jenna Ortega.

Celebrities are victims of the sale of non-consensual sexual deepfakes. (Instagram: Taylor Swift, Selena Gomez and Jenna Ortega)

Surprisingly, an eBay store offered for sale more than 400 products containing altered photos of celebrities such as Taylor Swift, Selena Gomez, Katy Perry, Ariana Grande and Alexandria Ocasio-Cortez, presenting them in compromising situations.

eBay reacted immediately to Forbes’ alert, removing hundreds of these fake photographs and suspending the accounts of the sellers involved.

This incident underscores the urgency of implementing more rigorous measures to combat the dissemination of digital content generated without consent, highlighting the need for a stricter regulatory framework that protects the integrity and privacy of all people in the digital environment.

 