Recently, the team at FAR AI Labs disclosed security vulnerabilities in the GPT-4 APIs, successfully jailbreaking the model through features such as fine-tuning and retrieval augmentation (search). Researchers were able to induce GPT-4 to generate targeted misinformation, leak private information, and insert malicious URLs into its outputs. These findings highlight the new security risks that accompany expanded API functionality, and both users and researchers should approach these features with caution.
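To make the attack surface concrete, here is a minimal sketch of the fine-tuning workflow the researchers probed, using the standard OpenAI Python client. The file path, training data, and model snapshot name are hypothetical placeholders, and no adversarial data is reproduced; the point is simply that fine-tuning accepts user-supplied training examples, which is where the reported vulnerabilities arise.

```python
# Sketch of the GPT-4 fine-tuning API surface (assumptions: openai Python
# client v1.x; "training_examples.jsonl" and the model snapshot are
# illustrative placeholders, not FAR AI's actual setup).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of chat-formatted training examples. The reported
#    risk is that a few harmful examples mixed into an otherwise benign
#    dataset can slip past upload-time moderation.
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),  # hypothetical path
    purpose="fine-tune",
)

# 2. Launch a fine-tuning job against the uploaded file. GPT-4 fine-tuning
#    was in limited access at the time of the study, so the exact model
#    name here is an assumption.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4-0613",
)
print(job.id, job.status)
```

Because the resulting fine-tuned model inherits whatever behavior the training examples encode, provider-side data screening is the main line of defense, which is why weaknesses in that screening translate directly into jailbreaks.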