The GPT-4 API has been shown to contain security vulnerabilities: the FAR AI research team successfully exploited the model through its fine-tuning, function-calling, and knowledge-retrieval APIs. Fine-tuning with as few as 15 harmful examples, or even 100 benign ones, was enough to weaken GPT-4's safety guardrails. The researchers were able to make the model generate misinformation, extract private information, and insert malicious URLs into its outputs. Fine-tuning could also induce significant bias, produce malicious code, and even surface unpublished information. This vulnerability highlights the
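For context on the fine-tuning pathway involved, the sketch below shows the chat-format JSONL dataset shape that OpenAI's fine-tuning API accepts. The record contents and the count of 15 are placeholders chosen for illustration; they are not the study's actual training examples, which were not published.

```python
import json

# Chat-format fine-tuning records in the shape expected by the OpenAI
# fine-tuning API. All contents below are placeholders for illustration.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Example prompt"},
            {"role": "assistant", "content": "Example target response"},
        ]
    }
    for _ in range(15)  # the study reported that as few as 15 examples sufficed
]

# Fine-tuning data is uploaded as JSONL: one JSON object per line.
with open("finetune_dataset.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

# Sanity check: every line parses back into a record with a "messages" list.
with open("finetune_dataset.jsonl") as f:
    lines = f.read().splitlines()
assert all("messages" in json.loads(line) for line in lines)
print(len(lines))  # 15
```

A file like this is uploaded and referenced when creating a fine-tuning job; the study's point is that such a small dataset can measurably shift the model's behavior.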