Netizens abroad have discovered a novel jailbreak technique: prompts written with the letters of each word scrambled can slip past conventional keyword-based safety filters, coaxing ChatGPT into generating ransomware. Researcher Jim Fan expressed amazement that GPT models can still comprehend scrambled words. The technique exploits the same phenomenon that lets the human brain read words whose interior letters are shuffled, and the successful jailbreak has drawn attention from the community.
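As a rough illustration of the readability phenomenon the article refers to (not the actual jailbreak prompt), here is a minimal Python sketch that shuffles each word's interior letters while keeping the first and last letters in place; the function names `scramble_word` and `scramble_text` are hypothetical, chosen for this example. Text transformed this way often remains legible to humans, and the report suggests GPT models can read it too:

```python
import random

def scramble_word(word: str) -> str:
    """Shuffle a word's interior letters, keeping the first and last fixed."""
    if len(word) <= 3:
        return word  # too short to have a shuffleable interior
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text: str) -> str:
    """Apply per-word scrambling across a whole sentence."""
    return " ".join(scramble_word(w) for w in text.split())

print(scramble_text("research shows scrambled words remain readable"))
# e.g. "rseaerch sohws srcmbaled wrods reamin rdaeable"
```

The output varies run to run, but the first and last letters anchor each word, which is why such text stays decipherable despite looking garbled to a simple keyword filter.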