Less than two months after Anthropic launched the Computer Use feature that allows Claude to control devices, security researchers have discovered potential security vulnerabilities. The latest findings disclosed by cybersecurity expert Johann Rehnberger are shocking: through simple prompt injections, AI can be induced to download and run malicious software.

Rehnberger has named this exploitation method "ZombAIs." In his demonstration, he successfully made Claude download Sliver—a command and control framework originally designed for red team testing, but now widely used by hackers as a malicious software tool. Even more concerning, this is just the tip of the iceberg. Researchers have pointed out that AI can also be induced to write, compile, and execute malicious code, making the attack methods extremely difficult to defend against.

Hacker, Code, Programmer

Image Source Note: Image generated by AI, image licensed by Midjourney

It is worth noting that these security risks are not unique to Claude. Security experts have found that the DeepSeek AI chatbot also has a prompt injection vulnerability, which could allow attackers to take over user computers. Additionally, large language models may output ANSI escape codes, leading to the so-called "Terminal DiLLMa" attacks, which could hijack system terminals.

In response, Anthropic has already warned users in its beta statement: "The Computer Use feature may not always function as expected, and we recommend taking precautions to isolate Claude from sensitive data and operations to avoid risks related to prompt injection."

This incident serves as a reminder: as AI technology rapidly advances, security issues cannot be overlooked. Developers need to find a balance between functionality and security, while users must enhance their security awareness and take necessary protective measures when using AI tools.