U.S. Prosecutors Push to Combat AI Child Exploitation
Attorneys general from all 50 U.S. states and four territories have issued a joint plea to Congress, urging immediate action against the proliferation of AI-facilitated child sexual abuse material (CSAM). They frame the move as essential to protecting the emotional and physical well-being of children, and that of their parents.
With AI’s capabilities evolving rapidly, swift action is needed to safeguard vulnerable individuals and ensure a safer digital landscape. While some states, including New York, California, Virginia, and Georgia, have made it illegal to share sexually exploitative AI deepfakes, the legal landscape remains inconsistent.
The attorneys general call on Congress to establish a committee to research solutions to the risks posed by AI-generated CSAM, and to draft legislation that explicitly covers AI-generated content. The request reflects their concern that AI creates new avenues for child exploitation that current laws do not adequately address.
“While internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult,” the letter says.
Major social media platforms ban such content, yet it persists across various online channels. Recent incidents, such as an app advertised on popular platforms that offered “face-swapping” users into suggestive videos, underscore the urgency of the issue.
This initiative is not unique to the United States: international efforts, such as the European AI Code of Conduct, are also under negotiation. While the attorneys general’s plea demonstrates a domestic commitment, it aligns with broader global efforts to combat the issue.