The research team at Hong Kong Baptist University has demonstrated a jailbreak attack on large language models (LLM) induced by deep hypnosis, revealing security vulnerabilities where these models may lose their self-defense mechanisms in response to human instructions. The team's proposed DeepInception constructs a new type of instruction prompt through personalized characteristics, achieving adaptive jailbreaking and consistently outperforming previous jailbreak methods. The study calls for increased attention to LLM self-jailbreaking, introducing the concept through personality and psychological traits. Experiments have shown the urgency of improving large model defense mechanisms, as emphasized by DeepInception. The main contributions of the research include proposing the concept of jailbreak attacks based on LLM personality, providing a prompt template for DeepInception, and experimentally demonstrating its leading effectiveness in jailbreaking. This study has sparked new concerns about LLM security, offering valuable insights for understanding and preventing LLM jailbreaking through a unique psychological perspective.