Following the 2024 U.S. presidential election, OpenAI released a blog post on Friday stating that ChatGPT rejected over 250,000 requests to generate images of political candidates in the month leading up to Election Day. These included attempts to create images of President-elect Trump, Vice President Harris, Vice Presidential candidate Vance, President Biden, and Minnesota Governor Walz.

OpenAI noted in the post that it has built multiple layers of safety measures into ChatGPT that refuse requests to generate images of real people, including political figures. These safeguards are particularly important during elections and are part of the company's broader effort to keep its tools from being used for misleading or harmful purposes.

Additionally, OpenAI has partnered with the National Association of Secretaries of State (NASS) to direct voting-related queries to CanIVote.org, helping the chatbot maintain political neutrality. For questions about election results, ChatGPT points users to news agencies such as The Associated Press and Reuters. Earlier this year, OpenAI also banned a cluster of ChatGPT accounts linked to Storm-2035, a covert Iranian influence operation that was using the tool to generate divisive political content.

OpenAI stated that it will continue monitoring ChatGPT to ensure its responses remain accurate and ethical. This year, the company also commended the Biden administration's policy framework on national security and artificial intelligence.

Key Points:

🛡️ ChatGPT rejected over 250,000 requests to generate images of political candidates in the month before the election.

🤖 OpenAI has built multiple safety measures into ChatGPT to prevent the generation of images of real people, especially during the election period.

🌐 OpenAI partners with the National Association of Secretaries of State to maintain political neutrality and direct users to reliable sources of election information.