Congress, put an end to this new AI technology that is being used to exploit children
2024-01-03
Using AI to create images of children being harmed is no joke. It is a despicable act. Behind every AI-generated image are real children who suffer.
Sexual predators are now using AI image generators to exploit children. In just one month, users on a dark-web forum shared nearly 3,000 AI-generated images of child sexual abuse. This poses a unique danger, because current child sexual abuse laws are outdated and do not address the risks posed by AI and other emerging technologies. Lawmakers need to act quickly to establish legal protections.

The national CyberTipline, a reporting system for suspected online child exploitation, received 32 million reports in 2022, a significant increase from the 21 million reports received two years earlier. The rise of image-generating AI platforms is expected to accelerate this growth. AI platforms are trained on existing visual material, including real children's faces taken from social media and photographs of actual exploitation. With the abundance of abusive images already online, AI can be used to generate even more harmful images.
The most advanced AI-generated images are virtually indistinguishable from unaltered photographs. Investigators have found new images of old victims, "de-aged" celebrities depicted as children in abuse scenarios, and "nudified" images taken from innocent photos of clothed children. Offenders can easily create images of child abuse using text-to-image software, and much of this technology is downloadable, allowing them to generate images offline without being discovered.
Using AI to create pictures of child sex abuse is not a victimless crime. Real children are behind every AI image, and survivors of past exploitation are re-victimized when new portrayals are created using their likeness. Additionally, studies show that those who possess or distribute child sex abuse material often commit hands-on abuse as well.
Furthermore, AI platforms like ChatGPT can be used to lure children: offenders can masquerade as a child or teen by generating youthful-sounding language. Criminals can produce realistic messages to manipulate young people into engaging in online interactions. This technology can also quickly learn, and even teach, grooming techniques, which makes it all the more dangerous.
In order to address these issues, the federal legal definition of child sexual abuse material needs to be updated to include AI-generated depictions. Tech companies should also be required to continuously monitor and report exploitative material, and employees of social media and tech companies should have legally mandated reporting responsibilities. Additionally, the use of end-to-end encryption needs to be reevaluated, as it can enable the storage and sharing of child abuse images.
By taking action now, lawmakers can prevent widespread harm to children in the evolving landscape of AI technology and social media.
Teresa Huizar, CEO of the National Children's Alliance, emphasizes the importance of protecting children from these dangers.