Several major American AI companies recently made formal commitments to the White House, pledging to prevent their AI products from being used to generate nonconsensual deepfake pornography and child sexual abuse material.
Six companies signed the non-binding pledge: Adobe, Anthropic, Cohere, Microsoft, OpenAI, and the open-source web data repository Common Crawl. Each committed to ensuring that its products are not misused to create harmful gender-based violence imagery.
Image source: AI-generated image, provided by image licensing service Midjourney.
The White House stated that as AI technology has advanced rapidly, cases of image-based gender violence have surged, making it one of the most concerning forms of AI misuse in recent years. According to the White House, the six companies have committed to sourcing their datasets responsibly and safeguarding them against use in generating gender-based violent content.
However, Common Crawl did not sign the other two commitments. Common Crawl crawls web content and makes it available to anyone who requests it, which has previously drawn accusations that its datasets contain harmful material. The second commitment, which Common Crawl did not sign, calls for removing nude images from AI training datasets when appropriate, a commitment Common Crawl arguably could have signed, since it does not collect images.
When asked why it did not sign the other two commitments, Rich Skrenta, executive director of Common Crawl, said the organization supports the broader goals of the initiative but was only asked to sign one. He noted, "We were not presented with all three options when signing. We may have been excluded from the second and third commitments because we do not conduct model training or produce end-user products."
This marks the second time in just over a year that prominent AI companies have made voluntary commitments to the White House. In July 2023, Anthropic, Microsoft, OpenAI, Amazon, Google, and others met at the White House, pledging to test their models, share research, and watermark AI-generated content to help prevent its use in nonconsensual deepfake pornography.
Although the U.S. has moved more slowly on AI policy, the EU has already approved stricter AI regulations. It is worth noting that while these commitments are non-binding, they reflect a degree of self-regulation within the industry.
Key Points:
🌟 Six U.S. AI companies have committed to preventing their products from being used to generate nonconsensual deepfake pornography.
🔍 Common Crawl signed only one of the commitments, as it does not directly conduct model training.
📅 This is the second round of voluntary commitments AI companies have made to the White House, reflecting industry efforts toward self-regulation.