The attorneys general from all 50 states have banded together and sent an open letter to Congress, asking for increased protective measures against AI-enhanced child sexual abuse images, as originally reported by AP. The letter calls on lawmakers to “establish an expert commission to study the means and methods of AI that can be used to exploit children specifically.”

The letter, sent to Republican and Democratic leaders of the House and Senate, also urges politicians to expand existing restrictions on child sexual abuse materials to specifically cover AI-generated images and videos. The technology is extremely new and, as such, nothing on the books yet explicitly places AI-generated images in the same category as other types of child sexual abuse materials.

“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the prosecutors wrote in the letter. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

Using image generators like Dall-E and Midjourney to create child sexual abuse materials isn’t a problem yet, as the software has guardrails in place that disallow that kind of thing. However, these prosecutors are looking ahead to a future in which open-source versions of the software begin popping up everywhere, each with its own guardrails, or lack thereof. Even OpenAI CEO Sam Altman has said that AI tools would benefit from government intervention to mitigate risk, though he didn’t mention child abuse as a potential downside of the technology.

The government tends to move slowly on technology for a number of reasons; it took Congress several years to take the threat of online child abusers seriously back in the days of AOL chat rooms and the like. So far, there’s no sign that Congress is looking to craft AI legislation that outright prohibits generators from creating this kind of foul imagery. Even the European Union’s sweeping Artificial Intelligence Act doesn’t specifically mention any risk to children.

South Carolina Attorney General Alan Wilson organized the letter-writing campaign and has encouraged colleagues to scour state statutes to find out if “the laws kept up with the novelty of this new technology.”

Wilson warns of deepfake content that features an actual child sourced from a photograph or video. This wouldn’t be child abuse in the conventional sense, Wilson says, but would depict abuse and would “defame” and “exploit” the child from the original image. He goes on to say that “our laws may not address the virtual nature” of this kind of situation.

The technology could also be used to conjure fictitious children, culling from a library of data, to produce sexual abuse materials. Wilson says this would still create “demand for the industry that exploits children,” countering the argument that such imagery wouldn’t actually hurt anyone.

Though deepfake child sexual abuse is a relatively new concern, the tech industry has been keenly aware of deepfake pornographic content and has taken steps to prevent it. Back in February, Meta, OnlyFans and Pornhub began using an online tool called Take It Down that lets teens report explicit images and videos of themselves for removal from the internet. The tool works for both regular images and AI-generated content.
