Microsoft has taken legal action against a group accused of deliberately developing and using tools to bypass the safety guardrails of its cloud AI products. According to a lawsuit filed by Microsoft in December last year in the U.S. District Court for the Eastern District of Virginia, ten unnamed defendants allegedly used stolen customer credentials and custom software to breach the Azure OpenAI Service.
Microsoft alleges that the defendants violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law by illegally accessing and using Microsoft's software and servers with the intent to create "offensive" and "harmful and illegal content." Microsoft did not provide specific details about the abusive content that was generated.
The company is seeking injunctive and "other equitable" relief as well as damages. Microsoft stated in its complaint that it discovered in July 2024 that credentials belonging to Azure OpenAI Service customers (specifically API keys, the unique strings used to authenticate an application or user) were being used to generate content that violated the service's acceptable use policy.
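For context on what such a key does, Azure OpenAI requests are typically authenticated by sending the key as an HTTP header. The sketch below only constructs a request without sending it; the endpoint, deployment name, API version, and key are placeholders, and the `api-key` header convention is an assumption based on Azure's documented scheme:

```python
import json
import urllib.request

# Hypothetical values -- real endpoints, deployments, and keys are
# specific to each customer's Azure resource.
ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "dall-e-3"                       # placeholder deployment name
API_KEY = "example-api-key-0123456789"        # a credential of the kind the suit alleges was stolen

def build_request(prompt: str) -> urllib.request.Request:
    """Construct (but do not send) an image-generation request.

    Whoever holds the key in the 'api-key' header is treated as the
    customer -- which is why stolen keys grant full access.
    """
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/images/generations?api-version=2024-02-01")
    body = json.dumps({"prompt": prompt, "n": 1}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "api-key": API_KEY,  # the only proof of identity the request carries
        },
        method="POST",
    )

req = build_request("a watercolor lighthouse at dusk")
print(req.full_url)
```

Because the key alone authenticates the caller, anyone who exfiltrates it can issue requests that are billed to, and attributed to, the legitimate customer.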
Microsoft's lawsuit states: "The specific manner in which the defendants obtained all API keys used to carry out the improper conduct described in this complaint is unclear, but it appears that the defendants have engaged in a systematic pattern of API key theft, enabling them to steal Microsoft API keys from multiple Microsoft customers."
Microsoft accuses the defendants of using API keys stolen from U.S. customers of the Azure OpenAI Service to run a "hacking-as-a-service" scheme. According to the complaint, the defendants created a client tool named de3u, along with software to handle and route communications from de3u to Microsoft's systems, in order to carry out this scheme.
Microsoft claims that de3u allows users to leverage the stolen API keys to generate images using DALL-E (one of the OpenAI models available to Azure OpenAI Service customers) without writing their own code. According to the complaint, de3u also attempted to prevent the Azure OpenAI Service from revising prompts used to generate images, which can happen, for instance, when a text prompt contains words that trigger Microsoft's content filtering.
As of the time of publication, the repository hosting the de3u project code on GitHub (a Microsoft subsidiary) is no longer accessible.
In a blog post released on Friday, Microsoft stated that the court has authorized it to seize a website "critical" to the defendants' operation, which will allow the company to gather evidence, work out how the defendants allegedly monetize their service, and disrupt any additional technical infrastructure it finds.
Microsoft also stated that it has "taken countermeasures," although the company did not specify what these were, and that it has "added additional security mitigations" for Azure OpenAI services in response to the activities it observed.