At the beginning of the year, there were widespread concerns that generative artificial intelligence could interfere with elections worldwide by spreading propaganda and misinformation. A recent report from Meta, however, suggests those fears have not materialized on its platforms: the company says generative AI had only a limited impact on election-related content across Facebook, Instagram, and Threads.
Image source note: AI-generated image, licensed from the service provider Midjourney.
The report covers major elections across countries and regions including the United States, Bangladesh, Indonesia, India, Pakistan, the European Parliament, France, the United Kingdom, South Africa, Mexico, and Brazil. Meta notes that while there were confirmed or suspected uses of AI during these election periods, the volume remained low, and its existing policies and processes proved sufficient to mitigate the risks posed by generative AI content. During these elections, AI-generated content related to elections, politics, and social issues accounted for less than 1% of fact-checked misinformation.
To prevent election-related deepfakes, Meta's Imagine AI image generator rejected nearly 590,000 requests to create images of figures including Trump, Vice President Harris, Governor Walz, and President Biden in the month before Election Day. Meta also found that accounts attempting to spread propaganda or misinformation gained only marginal productivity and content-generation benefits from using generative AI.
Meta emphasizes that the use of AI has not hindered its ability to disrupt these covert influence campaigns, because the company focuses on the behavior of the accounts rather than the content they publish, regardless of whether it is AI-generated. Meta also announced that it has removed about 20 new covert influence operations worldwide to prevent foreign interference. Most of the targeted networks had no real audience, and some artificially inflated their popularity with fake likes and followers.
Meta also criticized other platforms, noting that false videos about the U.S. elections frequently surfaced on platforms such as X. The company said that as it reviews the lessons of this year, it will continue to evaluate its policies and will announce any changes in the coming months.
Key points:
📰 Meta's report shows that AI content accounts for less than 1% of election-related misinformation.
🚫 The Imagine AI image generator rejected nearly 590,000 deepfake image requests.
🌍 Meta has globally targeted about 20 covert influence networks to prevent foreign interference.