Recently, security firm PromptArmor exposed a severe security vulnerability in Slack AI, stating that it is susceptible to malicious prompt injection attacks. Slack AI is an add-on service within Salesforce's team communication service, primarily used for generative tools such as summarizing long conversations, answering questions, and aggregating information from infrequently accessed channels. However, PromptArmor claims that this service is not as secure as it claims to be.

Hackers Vulnerability Leak Security

The crux of the vulnerability lies in Slack's allowance for users to query data from both public and private channels, including public channels to which users are not subscribed. PromptArmor believes that while Slack considers this normal behavior, it provides an opportunity for attacks.

The specific attack method is as follows: An attacker can place an API key in a private channel, visible only to users within that channel. Subsequently, the attacker creates a public channel and inputs a malicious prompt. When Slack AI processes these queries, the AI incorporates the attacker's prompt content as part of the context and generates content accordingly. For example, an attacker might add a link in the prompt that, when clicked, sends the API key data to the attacker's server.

Worse still, Slack added file-sharing functionality in its August 14th update, meaning files in channels and direct messages could also become targets for leakage. Attackers could even use PDF files with hidden malicious commands to conduct attacks; once a user uploads such a file, the consequences could be dire.

PromptArmor advises Slack workspace administrators to restrict access to documents until the issue is resolved. Although PromptArmor has informed Slack of its findings, Slack states that messages in public channels can be searched and viewed by all members of the workspace, even if they have not joined the channel. PromptArmor believes that Slack does not fully understand the risks posed by prompt injection.

Slack has yet to respond to this vulnerability, and user concerns about the vulnerability are growing.

Key Points:

🌐 Slack AI has a vulnerability; malicious prompt injection may lead to private channel data leakage.

🔑 Attackers can steal confidential information by creating public channels and malicious prompts.

📄 With the update, uploaded files can also become targets of attacks, posing a greater risk.