OpenAI recently released a report reiterating that malicious use of its AI service, ChatGPT, is prohibited. Amid a surge in users, the company has identified illicit activity on the platform and responded by suspending dozens of suspicious accounts. Recent data shows ChatGPT's weekly active users have surpassed 400 million, adding more than 100 million in just three months, a scale that raises ethical and legal concerns about how the service is used.

The report makes clear that using OpenAI's tools for fraudulent or malicious activity is prohibited. The company recently investigated several cases involving fake job postings, identifying and banning multiple related accounts. In another case, an account was found creating and disseminating fabricated news reports intended to smear the United States, publishing the articles in Latin America under the guise of Chinese media outlets.

Additionally, accounts originating from Cambodia were using ChatGPT to translate and generate comments, assisting in "romance scams" on social media platforms like X, Facebook, and Instagram.

OpenAI said it has shared its findings with other companies in the industry, such as Meta, to combat misuse of ChatGPT. Through these actions, OpenAI aims to protect the platform's reputation and shield users from scams.

As a leading AI company, OpenAI says it will continue to monitor how its service is used to prevent misuse of the technology. The company emphasizes that maintaining user trust is its top priority and that it will further strengthen how it detects and manages malicious accounts going forward.

Key Takeaways:

🌟 OpenAI has suspended dozens of suspicious accounts to combat malicious use of ChatGPT.

📰 The report highlights accounts using ChatGPT to create fake news and job postings for fraudulent purposes.

🔗 OpenAI is sharing its findings with other tech companies to maintain platform integrity and safety.