AI image training dataset found to include child sexual abuse imagery


A popular training dataset for AI image generation contained links to child sexual abuse imagery, the Stanford Internet Observatory found, potentially allowing AI models to create harmful content.

LAION-5B, a dataset used by Stable Diffusion creator Stability AI and by Google's Imagen image generators, included at least 1,679 illegal images scraped from social media posts and popular adult websites.

The researchers began combing through the LAION dataset in September 2023 to investigate how much, if any, child sexual abuse material (CSAM) was present. They looked through hashes, the images' unique identifiers, which were sent to CSAM detection platforms like PhotoDNA and verified by the Canadian Centre for Child Protection.
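
For readers unfamiliar with hash matching: a service like PhotoDNA computes a fingerprint of each image and compares it against fingerprints of known illegal material, so the content itself never has to be viewed or redistributed. PhotoDNA's perceptual hashing is proprietary, so as a rough, minimal sketch, the Python below shows the simpler cryptographic-hash variant of the same idea; the KNOWN_HASHES set and the ./images path are hypothetical placeholders, not anything from the researchers' actual pipeline.

import hashlib
from pathlib import Path

# Hypothetical set of known-bad hashes. Real systems receive these
# from clearinghouses and detection platforms rather than hardcoding them.
KNOWN_HASHES = {
    "d41d8cd98f00b204e9800998ecf8427e",  # placeholder value
}

def md5_of_file(path: Path) -> str:
    """Return the hex MD5 digest of a file, read in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: str) -> list[Path]:
    """Flag files whose hash appears in the known-hash set."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and md5_of_file(p) in KNOWN_HASHES]

if __name__ == "__main__":
    for match in scan_directory("./images"):
        print(f"flagged: {match}")

Note that exact cryptographic hashes break if an image is resized or recompressed; perceptual hashes like PhotoDNA's are designed to survive such edits, which is why the researchers relied on dedicated detection platforms rather than simple checksums.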

The dataset does not keep...

Continue reading…



source https://www.theverge.com/2023/12/20/24009418/generative-ai-image-laion-csam-google-stability-stanford

