As the 2024 U.S. presidential election approaches, we are entering a new era in which generative artificial intelligence (AI) is beginning to make its mark on the electoral stage. Imagine voters' decisions being swayed by AI-generated images, videos, and audio: this is no laughing matter. Just recently, former President Trump shared a series of AI-generated images depicting Taylor Swift fans wearing T-shirts in support of his candidacy, some of which had originally been posted as satire.


More concerning, in January some New Hampshire residents received deepfake robocalls imitating President Biden's voice and urging them not to participate in the Democratic primary. With only a few months left until voting day, experts warn that similar AI-generated disinformation will only escalate, while the technology for identifying such content remains immature. Lance Hunter, a professor of political science at Augusta University in Georgia, points out, "If a portion of people are unaware that it's fake, this could have a substantive impact on the election results."

The applications of generative AI extend beyond chatbots; the technology can also produce images, video, and audio. It is spreading rapidly around the world and is accessible to anyone, including those who wish to use it maliciously. In fact, such incidents have already occurred in countries including India, Indonesia, and South Korea, although it remains unclear whether that content actually influenced voters' choices. But imagine fake videos of Trump or Vice President Harris going viral online: the impact on the vote could be immense!

The Cybersecurity and Infrastructure Security Agency (CISA) of the U.S. Department of Homeland Security is already on high alert for the threats posed by generative AI. CISA senior advisor Cait Conley states, "Foreign adversaries have targeted U.S. elections and infrastructure in previous elections, and we anticipate this threat will persist in 2024." She emphasizes that CISA is providing guidance to state and local election officials on foreign influence operations and disinformation.

So, how can the chaos caused by generative AI be contained before the election? The challenge lies in the difficulty of distinguishing real content from generated fakes. As the technology has advanced, AI-generated imagery has evolved from telltale oddities, such as hands with 15 fingers, to lifelike representations.

Last July, the Biden administration secured voluntary commitments from companies including Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI to address the potential risks posed by AI. However, these agreements are not legally binding. Professor Hunter believes that bipartisan federal legislation specifically targeting fake content in political campaigns will eventually emerge.

Social media platforms such as Meta, TikTok, and X can also play a role in curbing the spread of fake media, for example by clearly labeling content created with generative AI, or even banning it outright. However, existing detection tools perform poorly. Some have even been derided as "snake oil," offering only hedged verdicts like "85% likely" rather than definitive answers.

With election day fast approaching and generative AI technology rapidly advancing, there is concern that malicious actors will exploit the technology to sow further online chaos before voting begins. The final outcome of the election remains to be seen.