A new controversy has emerged in a federal lawsuit over a Minnesota law banning the "use of deepfake technology to influence elections." In their latest filings, the plaintiffs' legal team argued that the sworn statement submitted in support of the law may contain text generated by artificial intelligence.
According to the Minnesota Reformer, the state's Attorney General Keith Ellison had asked Jeff Hancock, founding director of the Stanford Social Media Lab, to provide supporting evidence for the law. However, several studies cited in Hancock's sworn statement cannot be substantiated and show signs of possible AI "hallucinations."
Hancock's sworn statement cited a study published in 2023 in the Journal of Information Technology and Politics, titled "The Impact of Deepfake Videos on Political Attitudes and Behavior."
However, related reports indicate that there is no record of this study in that journal or any other publication. A second study cited in the sworn statement, titled "The Illusion of Deepfakes and Authenticity: The Cognitive Processes Behind Acceptance of Misinformation," likewise cannot be located.
In response, the attorney representing Minnesota State Representative Mary Franson and conservative YouTuber Christopher Kohls stated in a filing: "These citations clearly exhibit characteristics of AI 'hallucinations,' suggesting that at least part of the content was generated by large language models like ChatGPT." The attorney further argued that this calls the credibility of the entire sworn statement into question, especially since many of its arguments lack methodological and analytical support.
Hancock has not yet responded to the allegations. The matter has sparked discussion about the use of artificial intelligence in the legal field, particularly about how to ensure the accuracy and reliability of information in matters involving the public interest and elections.
The incident not only deepens concerns about the impact of deepfake technology but also raises new questions for the legal community about how to handle AI-related evidence. Effectively identifying and verifying information sources has become a pressing challenge for legal practice.
Key Points:
📰 The sworn statement regarding Minnesota's deepfake bill is questioned as AI-generated text.
🔍 The legal team pointed out that the cited studies do not exist, indicating possible AI "hallucinations."
⚖️ This incident has sparked widespread discussion about the use of artificial intelligence in legal documents, focusing on the accuracy of information.