Microsoft recently released a white paper examining the malicious exploitation of generative artificial intelligence (AI), covering issues such as fraud, child sexual abuse material, election manipulation, and non-consensual intimate images. The company emphasizes that these are not merely technical challenges but significant threats to society.
According to the white paper, criminals are increasingly exploiting the capabilities of generative AI for malicious ends: using AI-generated misinformation to commit fraud, creating child sexual abuse material, manipulating elections with deepfakes, and producing non-consensual intimate images, which disproportionately target women. Hugh Milward, Corporate Vice President of External Affairs at Microsoft, stated: "We must never forget that the misuse of AI has profound impacts on real people."
The white paper is addressed specifically to UK policymakers and proposes a comprehensive solution built on six core elements: a robust security architecture, durable provenance and watermarking tools for media, modernized laws to protect the public, strong collaboration among industry, government, and civil society, safeguards against service abuse, and public education.
Among its specific recommendations, Microsoft calls for AI system providers to inform users when the content they are interacting with is AI-generated. It also suggests deploying advanced provenance tools to label synthetic content, and urges the government to lead by example by verifying the authenticity of its own media content. Microsoft further argues for new laws prohibiting AI-enabled fraud in order to protect the integrity of elections, and for strengthening the legal frameworks that protect children and women from online exploitation, including making the creation of sexual deepfakes a criminal offense.
Microsoft also points out that storing metadata indicating whether media is AI-generated is crucial. Companies such as Adobe are already advancing similar projects aimed at helping people identify the origin of images. However, Microsoft believes that standards like Content Credentials require policy measures and public awareness to be effective.
In addition, Microsoft collaborates with organizations such as StopNCII.org to develop tools for detecting and removing abusive images. Victims can seek redress through Microsoft's central reporting portal, and young people receive additional support through the National Center for Missing and Exploited Children's "Take It Down" service. Milward stated: "The issue of AI abuse may persist, so we need to double our efforts and engage in creative collaboration with tech companies, charitable partners, civil society, and governments to address this issue. We cannot do it alone."
Key Points:
🛡️ Microsoft releases a white paper, revealing various ways generative AI is maliciously used, including fraud and election manipulation.
📜 Addressing UK policymakers, Microsoft proposes a six-element solution, calling for comprehensive protection through laws and technology.
🤝 Emphasizing the importance of collaboration, Microsoft calls for a collective effort to tackle the challenges posed by AI abuse.