Recently, Brad Smith, Microsoft's Vice Chairman and President, called on the U.S. Congress to enact legislation targeting AI-generated deepfakes to protect citizens from scams and manipulation. He noted that while the tech industry and nonprofit organizations have taken some steps to address the problem, the law must keep pace with the technology to combat deepfake fraud.
Smith emphasized that enacting a comprehensive "Deepfake Fraud Act" is one of the crucial measures the U.S. must take. Such a law would give law enforcement a legal framework for prosecuting cybercriminals who use the technology to commit fraud. He also called on lawmakers to update federal and state laws on child sexual abuse material and non-consensual intimate images so that they explicitly cover AI-generated content.
Notably, the Senate recently passed a bill targeting non-consensual pornographic deepfakes, allowing victims to sue the creators of such content. The bill was introduced after a scandal in which students used AI to generate lewd images of their female classmates. Microsoft, for its part, has strengthened the safety controls in its own AI products to prevent them from being misused in similar ways.
Smith pointed out that the private sector has a responsibility to innovate and implement safeguards against the misuse of AI. Although the Federal Communications Commission (FCC) has banned AI-generated voices in robocalls, generative AI still makes it easy to fabricate convincing audio, images, and video, a problem that has grown especially acute ahead of the 2024 presidential election. For example, Elon Musk recently shared a deepfake video on social media that appeared to violate the platform's own policy on synthetic and manipulated media.
To bolster public trust in information, Smith urged Congress to require providers of AI systems to use state-of-the-art provenance tools to label synthetic content. Such measures, he said, would help the public determine whether a piece of content is AI-generated or manipulated.
Key Points:
🌐 Microsoft calls on Congress to enact the "Deepfake Fraud Act," providing legal support to combat AI-generated fraud.
👶 Lawmakers need to update laws related to child sexual abuse and non-consensual images to include AI-generated content.
🔍 Smith proposes requiring AI content to be marked, enhancing public trust in information.