Major Vulnerability Discovered in GPT-4 API: A Single Prompt Can Extract Private Information
Source: 站长之家 (Chinaz)
Researchers at FAR AI recently discovered security vulnerabilities in the GPT-4 APIs, successfully jailbreaking the model through features such as fine-tuning and retrieval augmentation (search). They were able to get GPT-4 to generate misinformation, extract private information, and insert malicious URLs. The findings highlight new security risks that arise as API functionality expands; both users and researchers should approach these features with caution.
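To make the fine-tuning attack surface concrete, here is a minimal sketch of how training data is packaged for an OpenAI-style fine-tuning API. The messages and file path below are placeholders for illustration only, not the researchers' actual attack inputs; the point is simply that a small user-supplied JSONL file is the entire interface through which model behavior can be altered.

```python
import json
import os
import tempfile

# Hypothetical placeholder training data in the chat-format JSONL layout
# expected by OpenAI-style fine-tuning endpoints. The researchers showed
# that even small, innocuous-looking datasets can degrade safety behavior.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Example prompt"},
        {"role": "assistant", "content": "Example response"},
    ]}
]

# Write the dataset as one JSON object per line (JSONL).
path = os.path.join(tempfile.mkdtemp(), "train.jsonl")
with open(path, "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Submitting this file is then a two-call operation with the OpenAI SDK
# (shown as comments, since it requires an API key and network access):
#   upload = client.files.create(file=open(path, "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")
```

The sketch shows why this surface is hard to police: from the provider's side, a malicious fine-tune is indistinguishable in format from a legitimate one.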
© AIbase 2024. Source: https://www.aibase.com/news/4541