The Internet Watch Foundation (IWF) has warned that the volume of AI-generated child sexual abuse material (CSAM) appearing on the open web has reached a "critical point." The safety watchdog said the amount of AI-generated illegal content it discovered in the past six months already exceeds the total for the whole of the previous year.
Image note: illustration generated by AI, licensed from Midjourney.
Derek Ray-Hill, the IWF's interim CEO, said the sophistication of these images suggests the AI tools used may have been trained on images and videos of real victims. "What we have seen in recent months shows that this problem is not going away and is in fact getting worse," he said. Analysts say AI-generated content has reached a "critical point" at which watchdogs and authorities can no longer readily determine whether an image involves a real child in need of help.
In the past six months, the IWF has acted on 74 reports of AI-generated child sexual abuse material, up from 70 in the year to March. Notably, this AI-generated content appears mainly on the publicly accessible web rather than the dark web. The IWF noted that more than half of it is hosted on servers in Russia and the United States, with significant amounts also found in Japan and the Netherlands.
While handling reports, the IWF has encountered "deepfake" videos in which adult pornographic content was manipulated to depict child sexual abuse. It has also found AI tools being used to "undress" ordinary photos of clothed children found online. The organization added that 80% of the reports of illegal AI-generated images on forums or AI galleries came from ordinary members of the public who had come across them.
Meanwhile, the social media platform Instagram has announced new measures to tackle "sextortion." It will roll out a feature that blurs any nude image received in a direct message and warns users to be cautious when sending messages containing nude images. The feature is enabled by default for teen accounts, and recipients can choose whether to view a blurred image.
Key Points:
🔍 In the past six months, the IWF has discovered more AI-generated child sexual abuse images than in the whole of the previous year, indicating a worsening problem.
⚖️ AI-generated images are now so sophisticated, and the tools behind them may have been trained on material depicting real victims, that watchdogs struggle to tell whether a real child is involved.
📱 Instagram is introducing a feature that blurs nude images in direct messages, helping users guard against sextortion.