A Glitch That Let ChatGPT Escape! Scrambled Prompts Enable the LLM to Rapidly Generate Ransomware, Jim Fan Stunned
新智元
Foreign netizens have discovered a novel jailbreak technique: by feeding ChatGPT scrambled prompts that slip past traditional safety filters, they got it to generate ransomware. Researcher Jim Fan expressed amazement that GPT models can comprehend scrambled words at all. The technique exploits the same phenomenon that lets the human brain read jumbled phrases and words: the model still grasps the intent of the garbled text even though the filters no longer flag it, so the jailbreak succeeds, and the finding has drawn attention from the community.
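The article does not publish the exact prompts used, but the transformation it describes resembles the familiar "typoglycemia" effect, where words remain readable as long as their first and last letters stay in place. Below is a minimal Python sketch of that kind of word scrambling; the function names and the benign example sentence are illustrative assumptions, not material from the source, and the snippet only demonstrates the text transformation itself.

```python
import random

def scramble_word(word: str) -> str:
    """Shuffle the interior letters of a word, keeping the first and
    last characters fixed so it stays readable to humans and models."""
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_prompt(prompt: str) -> str:
    """Apply the scrambling word by word; exact-keyword matching no
    longer fires, yet the sentence remains largely comprehensible."""
    return " ".join(scramble_word(w) for w in prompt.split())

if __name__ == "__main__":
    # Benign example sentence (hypothetical, for illustration only).
    text = "please summarize this research article about language models"
    print(scramble_prompt(text))
    # Possible output: "plsaee suimamrze tihs rarseech aclitre auobt lagnauge mdeols"
```

The design choice mirrors the effect the article highlights: simple filters that match exact strings see only gibberish, while a large language model, like a human reader, can still reconstruct the intended meaning.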
© AIbase 2024. Source: https://www.aibase.com/news/643