As large language models are increasingly applied in productivity domains, the security risks they face are becoming more pronounced. Prompt injection attacks are a type of adversarial attack that can guide LLMs to generate harmful content, posing a serious threat to system security. This article delves into 12 strategies of adversarial prompt injection attacks and proposes a solution to enhance LLM security by utilizing red team datasets. Every internet user should remain vigilant and work together to maintain cyber security.