Microsoft has taken legal action against a group the company claims deliberately developed and used tools to bypass the guardrails of its cloud AI products.
According to a complaint filed by the company in December in the US District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft’s fully managed service powered by technologies from ChatGPT developer OpenAI.
In the complaint, Microsoft accuses the defendants — referred to only as “Does,” a legal pseudonym — of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and a federal racketeering law by unlawfully accessing and using Microsoft’s software and servers to “create offensive” and “harmful and illegal content.” Microsoft did not provide specific details about the abusive content that was created.
The company is seeking injunctive and “other equitable” relief as well as damages.
In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials — specifically API keys, the unique strings of characters used to authenticate an app or user — were being used to generate content that violated the service’s acceptable use policy. An investigation subsequently revealed that the API keys had been stolen from paying customers, according to the complaint.
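For context, an Azure OpenAI API key is a bearer credential: it is sent with each request, and possession of the string is the only proof of identity, which is why a stolen key works exactly like a legitimate one. The minimal sketch below builds (but deliberately never sends) such a request; the endpoint, deployment name, and key are placeholders for illustration, not details from Microsoft’s complaint.

```python
import json
import urllib.request

# Hypothetical values -- placeholders, not details from the complaint.
ENDPOINT = ("https://example-resource.openai.azure.com/openai/deployments/"
            "dall-e-3/images/generations?api-version=2024-02-01")
API_KEY = "0123456789abcdef0123456789abcdef"

# Azure OpenAI authenticates each call with an `api-key` request header.
# Anyone who holds the key string -- legitimately or not -- is treated as
# the paying customer. The request is constructed but never transmitted.
body = json.dumps({"prompt": "a lighthouse at dusk", "n": 1}).encode()
req = urllib.request.Request(
    ENDPOINT,
    data=body,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
print(req.get_header("Api-key"))  # the credential travels as a plain header
```

Because the key alone authenticates the caller, a leak gives an attacker the same access as the customer until the key is rotated or revoked.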
“The exact manner in which Defendants obtained all of the API Keys used to conduct the misconduct described in this complaint is unknown,” Microsoft’s complaint states, “but it appears that Defendants have engaged in a pattern of systematic API key theft that allowed them to steal Microsoft API keys from many Microsoft customers.”
Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to US-based customers to create a “hacking-as-a-service” system. According to the complaint, to pull off this scheme, the defendants created a client tool called de3u, as well as software to process and route communications from de3u to Microsoft systems.
De3u allowed users to leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft claims. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for example, when a text prompt contains words that trigger Microsoft’s content filtering.
A repository containing de3u project code hosted on GitHub — a company owned by Microsoft — is no longer accessible at the time of publication.
“These features, combined with Defendants’ unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft’s content and abuse measures,” the complaint states. “Defendants knowingly and intentionally accessed the Azure OpenAI Service protected computers without authorization, and as a result of such conduct caused damage and loss.”
In a blog post published Friday, Microsoft says a court has authorized it to seize a website “instrumental” to the defendants’ operation, which the company says will allow it to gather evidence, decipher how the defendants’ alleged services are monetized, and disrupt any additional technical infrastructure it finds.
Microsoft also says it “implemented countermeasures,” which the company did not specify, and “added additional security mitigations” to its Azure OpenAI service to target the activity it observed.
