OpenAI has warned that cybercriminals are exploiting its artificial intelligence model, ChatGPT, for a range of malicious activities, including developing malware, spreading misinformation, and conducting phishing attacks.

According to a new report, OpenAI has disrupted more than 20 deceptive operations worldwide since the beginning of 2024, revealing a troubling trend of AI misuse. These activities range from debugging malware to writing content for fake social media accounts and crafting convincing phishing messages.

OpenAI states that its mission is to ensure these tools serve the common good of humanity, and the company is therefore committed to detecting, preventing, and combating misuse of its models. In this election year especially, OpenAI stresses the importance of a robust, multi-layered defense against state-linked cyberattacks and covert influence operations, which may exploit its models to spread false propaganda on social media and other platforms.

The report also notes that since its May 2024 threat report, OpenAI has continued to disrupt activities including malware debugging, article writing for websites, and the generation of false social media content. These operations varied in sophistication, ranging from simple content-generation requests to complex, multi-stage social media analysis and response campaigns, and even included one case involving AI-generated misinformation.

To better understand how cybercriminals are leveraging AI, OpenAI analyzed the activities it disrupted and identified several preliminary trends that could inform the broader discussion of AI's role in the threat landscape.

OpenAI's analysis notes that AI gives defenders powerful capabilities for identifying and analyzing suspicious behavior. Although investigations still require human judgment and expertise, these new tools have allowed OpenAI to shorten certain analysis steps from days to minutes. The analysis also found that cybercriminals typically use OpenAI's models in the intermediate stages of their operations: after acquiring basic tools, but before deploying the "finished product" on the internet.

While cybercriminals continue to experiment with OpenAI's models, there is currently no evidence that this has produced meaningful breakthroughs in their ability to create new types of malware or build viral audiences. In fact, the election-related deceptive activity OpenAI uncovered this year failed to generate widespread social media engagement or sustained attention.

Going forward, OpenAI will continue to work with its intelligence, investigation, security research, and policy teams to anticipate how malicious actors might exploit advanced models for harmful activities and to plan enforcement actions accordingly. It will share findings with internal safety teams, collaborate with stakeholders, and work with industry peers and the research community to address emerging risks and maintain collective security and safety.

Key Points:  

📌 OpenAI has identified cybercriminals using ChatGPT for malware development and misinformation dissemination.  

🔒 Since the beginning of 2024, OpenAI has disrupted more than 20 deceptive operations worldwide, helping to safeguard cybersecurity.  

🤖 Despite this misuse, AI models have not significantly enhanced cybercriminals' ability to create malicious software.